How reliable is "science"

Dean of Beatdowns
Posts: 10480
Joined: Sat May 15, 2010 10:34 am

Re: how reliable is "science"

Post by Info » Tue Nov 01, 2011 7:14 pm

Report: Dutch 'Lord of the Data' Forged Dozens of Studies
by Gretchen Vogel on 31 October 2011, 7:05 PM

One of the Netherlands' leading social psychologists made up or manipulated data in dozens of papers over nearly a decade, an investigating committee has concluded.

Diederik Stapel was suspended from his position at Tilburg University in the Netherlands in September after three junior researchers reported that they suspected scientific misconduct in his work. Soon after being confronted with the accusations, Stapel reportedly told university officials that some of his papers contained falsified data. The university launched an investigation, as did the University of Groningen and the University of Amsterdam, where Stapel had worked previously. The Tilburg commission today released an interim report (in Dutch), which includes preliminary results from all three investigations. The investigators found "several dozens of publications" in which fictitious data had been used. Fourteen of the 21 Ph.D. theses Stapel supervised are also tainted, the committee concluded.

Stapel issued a statement today in which he apologizes to his colleagues and says he "failed as a scientist" and is ashamed of his actions. He has cooperated to an extent by identifying papers with suspect data, according to university officials. The investigation by the three universities is ongoing and should ultimately investigate more than 150 papers that Stapel has published since 2004, including a paper earlier this year in Science on the influence of a messy environment on prejudice. "People are in shock," says Gerben van Kleef, a social psychologist at the University of Amsterdam, who did not work directly with Stapel. "Everybody wonders how this could have happened and at this proportion."

Stapel's work encompassed a broad range of attention-catching topics, including the influence of power on moral thinking and the reaction of psychologists to a plagiarism scandal. The committee, which interviewed dozens of Stapel's former students, postdoctoral researchers, co-authors, and colleagues, found that Stapel alone was responsible for the fraud. The panel reported that he would discuss in detail experimental designs, including drafting questionnaires, and would then claim to conduct the experiments at high schools and universities with which he had special arrangements. The experiments, however, never took place, the universities concluded. Stapel made up the data sets, which he then gave the student or collaborator for analysis, investigators allege. In other instances, the report says, he told colleagues that he had an old data set lying around that he hadn't yet had a chance to analyze. When Stapel did conduct actual experiments, the committee found evidence that he manipulated the results.

Many of Stapel's students graduated without having ever run an experiment, the report says. Stapel told them that their time was better spent analyzing data and writing. The commission writes that Stapel was "lord of the data" in his collaborations. It says colleagues or students who asked to see raw data were given excuses or even threatened and insulted.

At least two earlier groups of whistleblowers had raised questions about Stapel's work, the commission found. No one followed up on their concerns, however. Stapel's fabrications weren't particularly sophisticated, the committee says, and on careful inspection many of the data sets have improbable effect sizes and other statistical irregularities. His colleagues, when they failed to replicate the results, tended to blame themselves, the report says. Among Stapel's colleagues, the description of data as too good to be true "was a heartfelt compliment to his skill and creativity," the report says.
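The committee's remark about improbable effect sizes points at a simple kind of forensic check. As a toy illustration (all numbers invented, not Stapel's actual data), one can compute a standardized effect size and flag values far beyond what social psychology experiments plausibly produce:

```python
import statistics

def cohens_d(group_a, group_b):
    """Standardized mean difference between two samples (pooled SD)."""
    na, nb = len(group_a), len(group_b)
    mean_a, mean_b = statistics.mean(group_a), statistics.mean(group_b)
    var_a, var_b = statistics.variance(group_a), statistics.variance(group_b)
    pooled_sd = (((na - 1) * var_a + (nb - 1) * var_b) / (na + nb - 2)) ** 0.5
    return (mean_a - mean_b) / pooled_sd

def looks_improbable(d, threshold=1.5):
    """Effect sizes in social psychology rarely exceed ~1.0;
    anything much larger deserves a second look."""
    return abs(d) > threshold

control = [4.1, 3.8, 4.5, 3.9, 4.2, 4.0]
treated = [6.9, 7.2, 7.0, 6.8, 7.1, 7.0]  # suspiciously tight, huge shift
d = cohens_d(treated, control)
print(round(d, 2), looks_improbable(d))  # a d this large (around 14) is a red flag
```

Real forensic statistics, of the kind later applied to Stapel's papers, is far more subtle, but the spirit is the same: fabricated numbers tend to be too clean.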

The report recommends that the universities of Groningen and Tilburg look into whether criminal charges are appropriate based on the misuse of research funds and possible harm to Stapel's students resulting from the fraud. The University of Amsterdam, where Stapel did his Ph.D., has apparently not been able to determine whether his thesis was fraudulent or not, in part because some of the original data records were destroyed. The committee suggests that the university consider revoking Stapel's degree, however, based on conduct that is "unbecoming" to the degree holder. (The University of Konstanz in Germany revoked disgraced physicist Jan Hendrik Schön's Ph.D. for that reason.)
social interaction is an interruption.

shape or be shaped.

Dean of Beatdowns
Posts: 10480
Joined: Sat May 15, 2010 10:34 am

Re: how reliable is "science"

Post by Info » Wed Nov 30, 2011 11:27 am

How a collapsing scientific hypothesis led to a lawsuit and arrest
By John Timmer, Nov 30, 2011

In 2006, scientists announced a provocative finding: a retrovirus called XMRV, closely related to a known virus from mice, was associated with cases of prostate cancer. But other labs, using different sets of patients, found no evidence of a viral infection. Before the controversy could be sorted out, another research group published a 2009 paper containing an even more intriguing claim. XMRV, it said, was associated with chronic fatigue syndrome (CFS), a disorder that some had claimed was purely psychosomatic.

Reaction came quickly. The CFS community, viewing a viral cause as a validation of their malady, embraced the finding. One author of the XMRV/CFS paper, Judy Mikovits, landed a position as research director of a private foundation dedicated to CFS. A company associated with the foundation started offering tests for infections.

Then the story took a strange turn. A long chain of events led not only to the collapse of the XMRV hypothesis but also landed Mikovits in jail—and brought death threats upon some of the researchers who debunked her ideas.

The science falls apart

As reports of XMRV findings appeared in the scientific literature, the federal government grew alarmed. It organized a special team tasked with trying to figure out whether XMRV represented a real threat to the nation's blood supply. At the same time, a number of labs were already working on CFS patients and quickly used their existing samples to look for the virus. The hunt was on.

The first results weren't promising. Retroviruses, as ably demonstrated by HIV, can be notoriously difficult to detect. The immune system does not efficiently generate antibodies against them and infections often persist at extremely low levels, making other forms of detection technically challenging. So it wasn't a huge shock that some labs quickly reported having trouble finding XMRV in other CFS patients, just as they had trouble finding it in prostate cancer patients.

Repeated failure to detect the virus eventually prompted some researchers to consider alternate explanations for the original findings. The resulting work, summarized here, effectively ended most worries about the virus. Two studies showed that samples which had tested positive for XMRV also showed signs of being contaminated with genetic material from mice—remember, XMRV looks like a mouse virus. Another showed that some commercial kit suppliers had also allowed their materials to be contaminated by material derived from mice. This made a strong case that XMRV's presence was a mere matter of contamination (though possibly one that was entirely outside the researchers' control).

The key piece of evidence came in an evolutionary analysis of XMRV origins. Researchers found that the most diverse group of XMRV sequences come from a single prostate cancer cell line called 22Rv1 that was grown in lab dishes. All of the XMRV sequences isolated from patients clustered within the evolutionary tree derived from the cancer cell line, meaning the ancestors of the viruses supposedly found in patients had all come from a single lab-grown cancer cell line. The clear implication is that the sequences came from the cell lines rather than patients.

How did a mouse virus get into this cell line in the first place? It turns out that 22Rv1 cells are commonly implanted into immune-compromised mice in order to test various approaches to detecting and controlling cancer. The cells probably picked up the XMRV virus during one of these procedures.

In parallel to this work, the federal government's Blood XMRV Scientific Research Working Group continued under the auspices of the National Institutes of Health and the Department of Health and Human Services. It organized the distribution of samples to nine separate labs, with the labs blinded to the disease status of the samples. The results, published in the same journal where the CFS paper first appeared, were definitive. They concluded that "current assays do not reproducibly detect XMRV/MLV in blood samples and that blood donor screening is not warranted."
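The blinding the working group used can be sketched in a few lines (hypothetical sample names; the real protocol was of course far more involved): the coordinating group assigns opaque codes, ships only the codes to the testing labs, and keeps the code-to-status key to itself:

```python
import random

def blind_samples(samples, seed=42):
    """Map each sample to an anonymous code. `samples` is a dict of
    sample_id -> disease status ('CFS' or 'control'). Labs receive only
    the codes; the coordinators keep the key for later unblinding."""
    rng = random.Random(seed)
    ids = list(samples)
    rng.shuffle(ids)                       # so codes carry no ordering hint
    key = {f"S{i:03d}": sid for i, sid in enumerate(ids)}
    shipped = sorted(key)                  # codes only, no status attached
    return shipped, key

samples = {"patient_1": "CFS", "patient_2": "control", "patient_3": "CFS"}
shipped, key = blind_samples(samples)
# labs report results per code; coordinators join results back via `key`
```

Because no lab can tell which coded tubes came from CFS patients, a positive result in patient samples but not controls cannot be an artifact of the testers' expectations.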

The state of play was now settled: XMRV detection was an artifact, the product of various forms of contamination, and it was derived from a cell line that had picked up the virus during experiments involving mice. At this point, there was no indication that the people who had associated the virus with any disease had done anything wrong. The appropriate response would be to accept the weight of the evidence and move on to other projects; several of the researchers did exactly that.

Judy Mikovits, however, did not.

Unscientific behavior

Even as the evidence against XMRV began to build, Mikovits dismissed the failure of other labs to replicate her work as technical shortcomings, while defending her own research and continuing to suggest that patients with CFS should be tested for XMRV. Many of her former collaborators parted ways at various points as more evidence rolled in.

Some portion of the CFS patient community, elated by the original virus finding, also refused to abandon the idea. The same account that described Mikovits' tenacity describes how a talk about her research received a glowing reception from patients and advocates; some left gifts for her, or even had bumper stickers printed saying, "It's the virus/XMRV."

In some cases, boosters of Mikovits and her ideas have allegedly taken a darker turn, one focused on bringing down the researchers who produced contrary findings. Indeed, many of the tactics sound similar to the ones employed against climate scientists: as the Guardian noted, activists have "bombarded researchers with freedom of information requests, made rounds of complaints to university ethical committees about scientists' behavior, and sent letters falsely alleging that individual scientists are in the pay of drug and insurance companies." Other researchers have reportedly faced death threats.

This sort of behavior is in no way linked to Mikovits, and there is no reason to think she would condone it. It is, however, enough to discourage people from entering the field, which could skew future research on the causes of CFS or prevent a better understanding of the spread of XMRV.

Although Mikovits wasn't condoning the antiscientific behavior of some CFS advocates, there were indications that she engaged in erratic behavior of her own.

From bad news to surreal events

Some of the data in Mikovits' original Science paper turned out to be a result of the contamination identified by other labs; a partial retraction of that paper was issued in September, in the same edition that contained the nine-lab, government-backed study.

About a week later, Mikovits was out of a job. One of her former collaborators had requested a cell line used in her work, and she refused. The Whittemore Peterson Institute for Neuro-Immune Disease, a private institute associated with the University of Nevada's medical school in Reno where Mikovits was director, became involved. Mikovits was apparently asked by the Institute to provide the cells. Again she refused, and was fired for insubordination.

That wasn't even the worst part of her week. A science blogger who does research on retroviruses obtained a copy of a slide used in a talk by Mikovits. It showed some of the same data used in the original Science paper, but the data had been relabeled and was then described as part of a very different experiment. Even a charitable interpretation of Mikovits' attempts to explain the discrepancy indicates a serious lapse of research ethics. (Oddly, Mikovits claims that she had a dispute with the Institute that was focused on its partnership in producing the tests for XMRV. As noted above, an earlier article quotes her as endorsing those tests.)

By the beginning of November, her situation was completely unravelling. Not only was Mikovits unwilling to turn over a cell line to other researchers, but she took her lab notebooks and various computer files with her when she was fired from Whittemore Peterson. These were the property of the institute, which responded by filing a lawsuit demanding their return and receiving a temporary restraining order that required Mikovits to preserve the materials.

Although Mikovits retained a lawyer who contested the charges, within a week she was arrested in California. The charge: possession of stolen property. Apparently, the Whittemore Peterson Institute was taking no chances when it came to getting its notebooks back. In yet another surreal twist, ScienceInsider was told that the charges were related to a break-in that occurred on November 9—several days after the lawsuit was filed.

The science worked like it should, even when people didn't

There's no shortage of human frailty on display in this story. Mikovits was clearly wedded to her idea long after the evidence should have convinced her otherwise. At best, she had a lax attitude towards accurately presenting research results; moreover, she felt possessive of her data and resources. Even if the charges about the lab notebooks and computer files end up being overblown, the fact that she was refusing to send cells to former collaborators is itself a significant breach of scientific ethics.

It's no surprise that patients who frequently had their disorder treated with dismissiveness would respond positively to indications that it had a concrete, biological cause. But demonizing scientists who don't support something that appeals to you is never going to end well, especially when all indications are that the scientists are being careful and thorough. Unfortunately, we're now seeing more of this sort of behavior in areas as diverse as climate change, vaccine safety, and animal research.

If individual humans come out looking bad here, some of their institutions performed remarkably well. Most journals, funding bodies, and research institutions have requirements for the sharing of published reagents precisely to block the sort of behaviors that Mikovits allegedly engaged in. For the same reason, research materials are the property of the institution where research is performed, rather than the property of individual researchers (although this is also a bit of self-interest, as the institutions get to keep intellectual property).

The Department of Health and Human Services, part of a government that's often derided as a paragon of inefficiency, managed to recognize a potential threat to the nation's blood supply, organize a consortium of research groups with relevant expertise at nine different institutions, arrange to give them all blinded samples, and get the resulting publication out. Interim results were also published along the way. Anyone who has experienced how difficult it can be to get academics to agree on anything will be doubly impressed with all that the working group accomplished.

The publishing system also seems to have acquitted itself well. Even though XMRV detection was clearly somewhat iffy in the prostate cancer studies, Science was willing to publish the original paper provided that its reviewers said the data looked solid. Other researchers weren't automatically convinced by publication in a high-profile journal, and they quickly set about trying to replicate it in different sample populations. The results, even though they were published in a lower-profile, open access journal called Retrovirology, proved persuasive and helped build a scientific consensus against the XMRV/CFS link.

These features are all necessary parts of scientific self-correction. Frequently, non-scientists view the corrective process as one where people question some results and attempt to perform an exact reproduction of the experiments that generated them. That's not what usually happens. Instead, the best questions usually focus on the consequences of the result—what should we be seeing if this is right?

In this case, various researchers looked at the initial XMRV results and determined that, if they were correct, we should see similar things using different assays and with samples from different patients. When we didn't, the results raised questions about the whole hypothesis. If evidence in the idea's favor couldn't be found by anyone else, it would look shaky even if we assume that the original experiments had all been done properly.

Those sorts of questions, which focus on the consequences and prompt an inexact form of replication, are essential to ensuring that the scientific record remains robust over the long term.

irony: even from this article, you can still see how people are erroneously taught to personify science as an infallible god instead of rationally realizing it is merely a tool, no better than the user employing it.

EVERYONE is a 'scientist.' the only thing in question is our degree of personal accuracy.
social interaction is an interruption.

shape or be shaped.

Jedi Bonersaber
Posts: 404
Joined: Mon Sep 06, 2010 10:45 am

Re: how reliable is "science"

Post by Gzeiger » Fri Jan 06, 2012 3:37 pm

The Gender Equality Paradox - Documentary NRK - 2011

I watched the video - pretty disturbing. Just seeing the feminist researchers' faces when they were presented with the evidence at the end was worth it - they seemed so uncomfortable, like they were thinking "why is this guy asking us about this - why doesn't he get with the program already?". The woman seemed to be very comfortable with the fact that she was being irrational - she just admitted that it was their job to maintain the doctrine no matter what the evidence said.

The male researcher, on the other hand, wanted to be the one with the facts on his side. But when presented with conflicting evidence, he didn't stop and pause for a SECOND. You could see in his eyes that he had dismissed the research long before he even thought to find weaknesses in methodology or anything of the sort. He just said "it's a weak study", with no specific argument.
"The 'training' starts when you wake up in the morning, and ends when you fall asleep." -Plum

Jedi Bonersaber
Posts: 401
Joined: Sun Jul 17, 2011 12:12 am
Location: Home of the first In-N-Out

Re: how reliable is "science"

Post by Torero » Fri Jan 06, 2012 8:17 pm

"What is your scientific basis to say that biology plays no part in two genders choice of work?"

-"Uhhhhhhhhhmmmmmmmmmmm, I have what you would call a theoretical basis. There's no room for biology in there for me"

This about sums it up.
"I never met a bitch that didn't need a little guidance."

Pillow Fort

how reliable is "science"

Post by Pillow Fort » Sat Jan 21, 2012 11:44 pm

This is one interesting thread.

It’s hard to escape the fact that there is a strong streak of technophilia in western culture.

The problem I find with this mentality is that the technology has become an end in and of itself, completely divorced from its most fundamental essence; namely, its function.

As a result there is no shortage of seemingly arbitrary applications of technology. Things as trivial in function as an iPod or Facebook are two of many examples. This is paraded around as proof that we live in the most technologically advanced civilization in history.

But to me this is silly; in that regard I consider a bicycle or a pencil to be more technologically “advanced” than an iPod or Facebook, simply by virtue of their more useful functions.

I read a book called Fast Food Nation which has something very interesting to say about this.
No society in human history worshiped science more devoutly or more blindly than the Soviet Union, where “scientific socialism” was considered the highest truth. And no society has ever suffered so much environmental devastation on such a massive scale.
Though I don’t believe the Soviets were the premier worshippers of technology (that honor certainly belongs to the United States and Japan), I think the consequences still stand.

This science worship also has many characteristics of a religion, like the claim that science is the only truth and all other forms of thinking are inferior.

The further we divide and compartmentalize the reality around us the more we miss its true essence; the authentic and wholesome experience of life the way nature intended it to be experienced.

Old enough to buy first Playboy
Posts: 25
Joined: Mon Jan 16, 2012 4:32 pm
Location: Los Angeles, CA

Re: how reliable is "science"

Post by iamjosan » Sat Feb 11, 2012 1:40 am

Prof. Plum, you are the first person I know who has shared the same view as me on science: most of science is theory! Most people treat scientific theories as fact and don't even bother to question any of them. And if you're caught questioning science, people attack you with their so-called democracy. They think just because the majority thinks it's correct, then it must be correct. This is like your example of having democracy between 100 children and 2 parents, where the children want to eat ice cream but the adults want to feed them veggies. Give the majority what they want, this is a democracy; fuck that, y'all don't know any better!


Re: how reliable is "science"

Post by Fasty » Tue Feb 14, 2012 5:17 pm

There are a lot of studies and statistics out there that can't be trusted at face value. I've read in one article that saturated fat is bad for you, but another says it's good. There are stats that say 1 in every 5 girls will be raped, but there's a chance the women who submitted their answers thought of a scenario that they PERCEIVED as rape but wasn't, and there's also the chance that the publisher could have changed the question so that participants would answer in a way that would benefit him.

Numbers, results, and other forms of empirical evidence do not lie. People do. I realized this at my summer internship at a research institute, when there was a presentation about the falsified report that linked autism to vaccines. In order to truly accept something from the scientific community, you would have to do a lot more research yourself to find other studies that support it, or you would have to see whether the people who funded the study in question would benefit heavily from a positive publication. There are probably a lot more ways that I'm missing, but the point is the average person doesn't want to do this much work.

Dean of Beatdowns
Posts: 10480
Joined: Sat May 15, 2010 10:34 am

Re: how reliable is "science"

Post by Info » Sun Apr 29, 2012 11:41 am

A Sharp Rise in Retractions Prompts Calls for Reform
By CARL ZIMMER, Published: April 16, 2012

In the fall of 2010, Dr. Ferric C. Fang made an unsettling discovery. Dr. Fang, who is editor in chief of the journal Infection and Immunity, found that one of his authors had doctored several papers.

It was a new experience for him. “Prior to that time,” he said in an interview, “Infection and Immunity had only retracted nine articles over a 40-year period.”

The journal wound up retracting six of the papers from the author, Naoki Mori of the University of the Ryukyus in Japan. And it soon became clear that Infection and Immunity was hardly the only victim of Dr. Mori’s misconduct. Since then, other scientific journals have retracted two dozen of his papers, according to the watchdog blog Retraction Watch.

“Nobody had noticed the whole thing was rotten,” said Dr. Fang, who is a professor at the University of Washington School of Medicine.

Dr. Fang became curious how far the rot extended. To find out, he teamed up with a fellow editor at the journal, Dr. Arturo Casadevall of the Albert Einstein College of Medicine in New York. And before long they reached a troubling conclusion: not only that retractions were rising at an alarming rate, but that retractions were just a manifestation of a much more profound problem — “a symptom of a dysfunctional scientific climate,” as Dr. Fang put it.

Dr. Casadevall, now editor in chief of the journal mBio, said he feared that science had turned into a winner-take-all game with perverse incentives that lead scientists to cut corners and, in some cases, commit acts of misconduct.

“This is a tremendous threat,” he said.

Last month, in a pair of editorials in Infection and Immunity, the two editors issued a plea for fundamental reforms. They also presented their concerns at the March 27 meeting of the National Academies of Sciences committee on science, technology and the law.

Members of the committee agreed with their assessment. “I think this is really coming to a head,” said Dr. Roberta B. Ness, dean of the University of Texas School of Public Health. And Dr. David Korn of Harvard Medical School agreed that “there are problems all through the system.”

No one claims that science was ever free of misconduct or bad research. Indeed, the scientific method itself is intended to overcome mistakes and misdeeds. When scientists make a new discovery, others review the research skeptically before it is published. And once it is, the scientific community can try to replicate the results to see if they hold up.

But critics like Dr. Fang and Dr. Casadevall argue that science has changed in some worrying ways in recent decades — especially biomedical research, which consumes a larger and larger share of government science spending.

In October 2011, for example, the journal Nature reported that published retractions had increased tenfold over the past decade, while the number of published papers had increased by just 44 percent. In 2010 The Journal of Medical Ethics published a study finding that the recent raft of retractions was a mix of misconduct and honest scientific mistakes.
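Putting those two growth figures on a common footing (a back-of-the-envelope calculation, not from the Nature report itself): if retractions grew tenfold while the paper count grew by 44 percent, the retraction rate per published paper rose roughly sevenfold over the decade:

```python
retraction_growth = 10.0   # retractions increased tenfold
paper_growth = 1.44        # papers increased by 44 percent

# growth of retractions *per published paper* over the same decade
rate_growth = retraction_growth / paper_growth
print(round(rate_growth, 1))  # -> 6.9
```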

Several factors are at play here, scientists say. One may be that because journals are now online, bad papers are simply reaching a wider audience, making it more likely that errors will be spotted. “You can sit at your laptop and pull a lot of different papers together,” Dr. Fang said.

But other forces are more pernicious. To survive professionally, scientists feel the need to publish as many papers as possible, and to get them into high-profile journals. And sometimes they cut corners or even commit misconduct to get there.

To measure this claim, Dr. Fang and Dr. Casadevall looked at the rate of retractions in 17 journals from 2001 to 2010 and compared it with the journals’ “impact factor,” a score based on how often their papers are cited by scientists. The higher a journal’s impact factor, the two editors found, the higher its retraction rate.
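The comparison the two editors ran can be sketched as follows. The journal figures below are invented for illustration; only the shape of the calculation (a "retraction index" of retractions per thousand published articles, set against impact factor) follows their approach:

```python
# (journal, impact factor, retractions 2001-2010, articles published)
journals = [
    ("Journal A", 50.0, 14, 30000),  # all numbers invented for illustration
    ("Journal B", 30.0,  8, 25000),
    ("Journal C", 10.0,  4, 40000),
    ("Journal D",  3.0,  1, 20000),
]

def retraction_index(retractions, articles, per=1000):
    """Retractions per `per` published articles."""
    return per * retractions / articles

for name, impact, r, n in sorted(journals, key=lambda j: -j[1]):
    print(f"{name}: impact {impact:5.1f}, retraction index {retraction_index(r, n):.3f}")
```

With these toy numbers the index falls monotonically with impact factor, mirroring the correlation Fang and Casadevall reported across 17 real journals.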

The highest “retraction index” in the study went to one of the world’s leading medical journals, The New England Journal of Medicine. In a statement for this article, it questioned the study’s methodology, noting that it considered only papers with abstracts, which are included in a small fraction of studies published in each issue. “Because our denominator was low, the index was high,” the statement said.

Monica M. Bradford, executive editor of the journal Science, suggested that the extra attention high-impact journals get might be part of the reason for their higher rate of retraction. “Papers making the most dramatic advances will be subject to the most scrutiny,” she said.

Dr. Fang says that may well be true, but adds that it cuts both ways — that the scramble to publish in high-impact journals may be leading to more and more errors. Each year, every laboratory produces a new crop of Ph.D.’s, who must compete for a small number of jobs, and the competition is getting fiercer. In 1973, more than half of biologists had a tenure-track job within six years of getting a Ph.D. By 2006 the figure was down to 15 percent.

Yet labs continue to have an incentive to take on lots of graduate students to produce more research. “I refer to it as a pyramid scheme,” said Paula Stephan, a Georgia State University economist and author of “How Economics Shapes Science,” published in January by Harvard University Press.

In such an environment, a high-profile paper can mean the difference between a career in science or leaving the field. “It’s becoming the price of admission,” Dr. Fang said.

The scramble isn’t over once young scientists get a job. “Everyone feels nervous even when they’re successful,” he continued. “They ask, ‘Will this be the beginning of the decline?’ ”

University laboratories count on a steady stream of grants from the government and other sources. The National Institutes of Health accepts a much lower percentage of grant applications today than in earlier decades. At the same time, many universities expect scientists to draw an increasing part of their salaries from grants, and these pressures have influenced how scientists are promoted.

“What people do is they count papers, and they look at the prestige of the journal in which the research is published, and they see how many grant dollars scientists have, and if they don’t have funding, they don’t get promoted,” Dr. Fang said. “It’s not about the quality of the research.”

Dr. Ness likens scientists today to small-business owners, rather than people trying to satisfy their curiosity about how the world works. “You’re marketing and selling to other scientists,” she said. “To the degree you can market and sell your products better, you’re creating the revenue stream to fund your enterprise.”

Universities want to attract successful scientists, and so they have erected a glut of science buildings, Dr. Stephan said. Some universities have gone into debt, betting that the flow of grant money will eventually pay off the loans. “It’s really going to bite them,” she said.

With all this pressure on scientists, they may lack the extra time to check their own research — to figure out why some of their data doesn’t fit their hypothesis, for example. Instead, they have to be concerned about publishing papers before someone else publishes the same results.

“You can’t afford to fail, to have your hypothesis disproven,” Dr. Fang said. “It’s a small minority of scientists who engage in frank misconduct. It’s a much more insidious thing that you feel compelled to put the best face on everything.”

Adding to the pressure, thousands of new Ph.D. scientists are coming out of countries like China and India. Writing in the April 5 issue of Nature, Dr. Stephan points out that a number of countries — including China, South Korea and Turkey — now offer cash rewards to scientists who get papers into high-profile journals. She has found these incentives set off a flood of extra papers submitted to those journals, with few actually being published in them. “It clearly burdens the system,” she said.

To change the system, Dr. Fang and Dr. Casadevall say, start by giving graduate students a better understanding of science’s ground rules — what Dr. Casadevall calls “the science of how you know what you know.”

They would also move away from the winner-take-all system, in which grants are concentrated among a small fraction of scientists. One way to do that may be to put a cap on the grants any one lab can receive.

Such a shift would require scientists to surrender some of their most cherished practices — the priority rule, for example, which gives all the credit for a scientific discovery to whoever publishes results first. (Three centuries ago, Isaac Newton and Gottfried Leibniz were bickering about who invented calculus.) Dr. Casadevall thinks it leads to rival research teams’ obsessing over secrecy, and rushing out their papers to beat their competitors. “And that can’t be good,” he said.

To ease such cutthroat competition, the two editors would also change the rules for scientific prizes and would have universities take collaboration into account when they decide on promotions.

Ms. Bradford, of Science magazine, agreed. “I would agree that a scientist’s career advancement should not depend solely on the publications listed on his or her C.V.,” she said, “and that there is much room for improvement in how scientific talent in all its diversity can be nurtured.”

Even scientists who are sympathetic to the idea of fundamental change are skeptical that it will happen any time soon. “I don’t think they have much chance of changing what they’re talking about,” said Dr. Korn, of Harvard.

But Dr. Fang worries that the situation could become much more dire if nothing happens soon. “When our generation goes away, where is the new generation going to be?” he asked. “All the scientists I know are so anxious about their funding that they don’t make inspiring role models. I heard it from my own kids, who went into art and music respectively. They said, ‘You know, we see you, and you don’t look very happy.’ ”
social interaction is an interruption.

shape or be shaped.


Re: how reliable is "science"

Post by JustSomebodyHead » Sun Apr 29, 2012 8:46 pm

There are bad apples in any type of enterprise. Risks are necessary, and problems like this will happen all the time.

It really comes down to a Gain/Loss game. For example, scam websites that trick emasculated men into paying for their shit ez-tricks are the rule, not the exception, in the men's support realm. Yet you would never dismiss the whole realm as useless, even in an extreme case like this. This also applies to science. If the rate of unethical papers published is kept to a certain percentage where the gains are still worth the hassle, then science is still reliable.


Re: how reliable is "science"

Post by Info » Mon Apr 30, 2012 1:01 am

JustSomebodyHead wrote:This also applies to science. If the rate of unethical papers published is kept to a certain percentage where the gains are still worth the hassle, then science is still reliable.
You're completely missing the point here.

Science ISN'T reliable because science ISN'T a PERSON.

It's not that 'science' is bad--science was never a person to begin with. It's that there's no such thing as a "scientist". This is an authority fallacy perpetuated by elitist assholes who don't even grasp the concept of the scientific method.

Everyone is a scientist. Even irrational ghetto bitches on Jerry Springer are scientists who develop conclusions (although granted, they are very poor conclusions). The only difference is in the degree of training. But most people today see a white lab coat and immediately assume the conclusion is infallible and that only a person wearing a white lab coat can make an authoritative conclusion. This is the hallmark of idiocy. This is the type of faulty 'scientific' comprehension exhibited by the typical intellectual today.

This thread is designed to point out this flawed approach to science. It is here to warn men about the danger of trusting in faulty methods and flawed surveys instead of relying on a rational, reasonable approach to investigation--one that employs essential public scrutiny rather than elitist "scientific" publications.
