Science Commons Symposium Recap

I was all set to do a comprehensive summary of the Science Commons Symposium, but several others have beaten me to it!  However, I think I can add some of my impressions from the librarian point of view.

First off, this is the first meeting I've attended where I actively participated over Twitter during the presentations.  It was really interesting to follow this additional layer of conversation.  I had also created a FriendFeed account ages ago, but have mostly used it to aggregate my own content.  Sometimes understanding a new social networking tool requires seeing it in action, which I certainly did this weekend.

Brian Glanz of the Open Science Foundation wrote up a great roundup, including links to the available slides for each speaker. If you want the slides, go visit that page; I haven't linked them below.

Jean-Claude Bradley, one of the speakers, also wrote up a quick recap.

Steve Koch also wrote up some notes (and on Saturday night, too!) including a mindmap!

Secondly, thanks to Microsoft Research for hosting us (best box lunch I've ever had!) and for the free book. The Fourth Paradigm: Data-Intensive Scientific Discovery is the first book Microsoft has published under a Creative Commons license, so make sure to snag a downloadable copy.
[Photo: the Microsoft campus]


So, on to my value-add.
  • A theme that emerged from several presenters (Cameron Neylon, Jean-Claude Bradley, Stephen Friend of Sage Bionetworks, John Wilbanks) was the problem of handling the exponential, explosive growth of data.  There are problems archiving it, problems with standardized ways of processing and handling it, and problems retrieving it.  Doesn't this sound like it's right up our alley?  And why aren't librarians doing more in this arena?
  • Similarly, another theme was the eroding trust in traditional peer-reviewed publications.  Jean-Claude pointed out a few specific examples of seriously flawed articles making it to publication; Antony Williams of ChemSpider showed examples of published articles linking to incorrect molecular structures found on Wikipedia (seriously!); and Peter Murray-Rust said that a serious flaw of the peer-review system is that it has no method for making authors take responsibility for their data, and instead relies too much on trust.  While there were some questions about weeding out the crap in open access or non-peer-reviewed science publishing (disclaimer: I do NOT mean to imply in the slightest that OA publishing = no peer review. Different things entirely), these examples seem to show that traditional peer review isn't doing such a great job of this.  Peter Binfield of PLoS ONE also went into detail about how the peer-review system is flawed, and talked about how journals (like PLoS ONE) are changing the system.  If some of these trends continue, they will have a serious impact on library subscriptions.  And if a fundamental breakdown in trust in the peer-review system occurs, open access journals are the least of our concerns.
  • Peter Murray-Rust was one of the only speakers to refer specifically to librarians when he stated that "librarians are not doing enough to make data open." He was talking specifically about electronic theses and dissertations at the time. One of the things that was especially interesting to me here was the Twitter conversation: Plausible (not a librarian) thought that libraries have largely been left out of the conversation.  Scilib (a librarian) seemed to agree with Murray-Rust, not just about ETDs, but about data in general.  I tend to agree with both - though we may be left out, that doesn't mean we can't butt in, especially since we are the single largest market for all the publishers and vendors who sell this stuff.
  • While I'm on the subject of Peter Murray-Rust, I spoke with him during the reception afterward and asked him whether he thought librarians had a place in curating data.  He didn't agree with that idea (although I said "curating" and didn't break it down as I did above - I think he thinks the curating should be left to the scientists, but that could be my own interpretation).  What he DID want from librarians is more help with the grant-writing process: tracking down potential funders, examples of successful grants, etc.  I thought that was particularly interesting.
  • John Wilbanks gave a very stimulating keynote to wrap up the symposium (note to self - I like the keynote at the end of the conference, a nice way to finish things up!). He talked generally about the challenges of bringing Creative Commons-style licensing to data, and the differences between the relationship of Creative Commons to copyright and the relationship of similar licensing to patents or material transfer agreements.  He said that trying to fit data into copyright licenses breaks down, and that the only real solution is to put data completely into the public domain (i.e., a Creative Commons Zero dedication) - which of course opens up a whole host of other complications.   He said that one of the things that will free data is shared names: common names and common formats make it possible to run a structured query across datasets in order to discover things (see the sketch after this list).  Hmm, why does that sound familiar? (Okay, maybe a simplistic leap, but you get my point.)
  • Wilbanks also had a few inspirational quotes (or paraphrases from my notes, anyway): "Generativity offers us an innovation-based chance for success." "Science began in the garage, now it's going back." "The richest people in the world can't buy cures to diseases." And "citation can scale in a way that attribution can't."
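Since Wilbanks's shared-names point is easier to see than to describe, here's a minimal sketch of what he's getting at (the compound names, the InChIKey identifier, and the measurements below are my own illustration, not anything presented at the symposium): two labs that record the same compound under different local names can't be joined by name alone, but a shared identifier lets one structured query work across both datasets.

    import sqlite3

    # Illustrative only: two hypothetical labs describe the same compound
    # under different local names, but share a standard identifier.
    conn = sqlite3.connect(":memory:")
    cur = conn.cursor()
    cur.execute("CREATE TABLE lab_a (inchikey TEXT, name TEXT, solubility_mg_ml REAL)")
    cur.execute("CREATE TABLE lab_b (inchikey TEXT, name TEXT, melting_point_c REAL)")
    cur.execute("INSERT INTO lab_a VALUES ('BSYNRYMUTXBXSQ-UHFFFAOYSA-N', 'aspirin', 3.0)")
    cur.execute("INSERT INTO lab_b VALUES ('BSYNRYMUTXBXSQ-UHFFFAOYSA-N', 'acetylsalicylic acid', 135.0)")

    # A join on local names finds nothing; a join on the shared identifier
    # links the two measurements, "discovering" a fact neither table holds alone.
    for row in cur.execute("""
        SELECT a.name, b.name, a.solubility_mg_ml, b.melting_point_c
        FROM lab_a AS a JOIN lab_b AS b ON a.inchikey = b.inchikey
    """):
        print(row)  # ('aspirin', 'acetylsalicylic acid', 3.0, 135.0)
    conn.close()

Without that shared inchikey column, the query has nothing to join on - which is exactly why common names and common formats matter more than any particular tool.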
Finally, I want to just round up a bunch of links to some of the cool things that were mentioned:
 I didn't write much about Heather Joseph's presentation, because I've heard much of that before.  However, I've uploaded my notes to Google Docs, so if you're interested in further ramblings, feel free to take a look.

Update:  because I posted this on the FriendFeed group, I thought I'd try to embed any ensuing discussion:

    Comments

    1. I take issue with the idea that peer review is broken. Imperfect, yes, but it's not broken.

      The problem isn't that bad science gets through; you HAVE to allow some bad science to be published if you're also going to allow any kind of breakthrough to get published. One is, at any one instant, generally indistinguishable from the other, assuming basic good practices are followed*.

      The problem is our (the public's and others') perception that science is perfect and fast, with an instant straight line from Hypothesis A to Law B. Science doesn't work that way. A lot of mistakes are made. I would guess that at best only 1% of articles published in the past decade are 100% correct, new, important, and even slightly paradigm shifting.

      That's the way it works. We make imperfect conclusions based on imperfect data imperfectly gathered from imperfect observations with imperfect instruments used to test imperfect hypotheses. The peer review process is another imperfect step to sharing our imperfect results. Finally, readers are imperfect and draw imperfect conclusions based on imperfect understanding of the imperfectly written and edited article.

      *Blatant fraud or poor research (as described in your link about chemistry sources) should never make it through the peer review process, but it does because that process is imperfect, not broken. I'm willing to bet the rate is much less than 0.01% of published papers.

    2. I don't disagree, Moses - and I do believe there should be a peer-review system of sorts. But I'm not sure that the traditional method is sufficient - there are better methods emerging, imo.

      Secondly, one of the speakers (JC Bradley) pretty much said the same thing about science (from my notes): there are NO FACTS, only measurements embedded within assumptions - maintain the integrity of the data by making all assumptions explicit - trying to move from assumption to proof. Or something like that, which to me is related to your point - maybe he'll come by and clarify. :-)

    3. First, I'm not sure that a few incidents of exposed fraud, bad reviews, poor editing, or other bad science constitutes a crisis in the peer review system. We hear about a few high profile problems, but we don't ever hear about the overwhelming majority of good work being published because that's an every-day occurrence.

      Sure, there are ALWAYS going to be errors in the peer reviewed literature, but they're usually inconsequential. That's because, as I alluded to in my first post, most publications are just tiny grains of sand in the pyramid of science--they could be there or not and the machinery of human inquisition will continue regardless.

      For the most part, the important errors are rooted out and corrected relatively quickly. Consider "cold fusion", the cloning fiasco, etc. After only a few years or even less, labs from around the world exposed the work as not reproducible. Of course there are some more insidious errors that make their way through the literature, and it may take decades to correct them. This is a problem, but I don't see it ever being fixed as long as humans are part of the peer review process.

      I am 100% pro open access and am stuck in the catch-22 of publish in a well-known but non-OA journal or publish in an obscure OA journal every time I publish a paper. (Open access and peer review are not the same issue. Open notebooks and peer review are not the same issue. Open access and open notebooks are not the same issue.)

      I am not a proponent of the model of allowing any anonymous person to comment on any article. I think anonymizing and democratizing the process of science in this way will end up killing it because the good science will always be drowned out by garbage spewed out by people with an agenda. For examples of this, look at Wikipedia and look at the science of studying human-induced global climate change and how the science is irrelevant to most policy decisions.

    4. I hope you don't think that I'm advocating the elimination of the peer-review process. However, there have been a lot of problems that have evolved in the system: it's too political, there are people who "game" the system, there isn't enough accountability, etc. Why can't/shouldn't we look for something better?

      And I don't mean having just anyone review. I think the "peer" part is critical - I certainly wouldn't be qualified to review a paper you've written. I don't have the education, the background, the understanding to know if the science is sound.

      Of course there are errors and there always will be, but I think the errors are just a part of an overall problem with the system. I think you should check out Peter Binfield's slides (from PLoS ONE) above, if you haven't already.

    5. I don't think you're advocating the elimination of the peer review process, but I don't agree that it's broken. I think there are probably some things that could be improved. However, there are issues with every suggested improvement I've seen. That's not to say they aren't good ideas, but I'm not convinced any of them will fix problems without introducing new, and often worse, ones.

      I'd just like to repeat that I'm not at all convinced that the peer review system that exists (and it's slightly different for every field, and pretty much every journal) is fundamentally broken. I have seen nothing more than a few anecdotes about generally random problems (a typo leading to a misinterpretation, or a reviewer rejecting a paper because of a personal issue, and a few instances of high profile fraud), but I have seen nothing systemic. If there are studies that suggest there are systemic problems, I don't know about them (of course that doesn't mean they don't exist); I haven't seen them cited in any discussions about this "problem."

      So, my contention is that the current paradigm of peer review is good enough until it can be shown that it's not (a few anecdotes are not convincing). This is the way it is with all science: We don't change paradigms quickly because we trust what has been shown to work, however imperfectly, until a different paradigm can be shown to be more accurate, more precise, or more whatever. This is why few scientists think Ptolemy was an idiot. He was completely wrong about the universe, but he wasn't an idiot.

      Consider that Einstein didn't receive his Nobel Prize in physics for his work on relativity, but for his work explaining the photoelectric effect. His relativity work was paradigm shifting and therefore was "wrong" until it was shown to explain the precession of the perihelion of Mercury's orbit to very high precision, among a few other things that classical mechanics had problems with.

      This slowness is not because scientists are stubborn (some are, of course), but because adopting new paradigms randomly is counterproductive. Extraordinary claims require extraordinary evidence. I see neither extraordinary problems with our peer review system, nor extraordinary proof that any of the proposed alternatives will be better.


      The problem with PLoS ONE and "peer" review is this:

      It took me less than 10 seconds to register at PLoS ONE. They didn't ask for ANY kind of documentation. They allow me to comment on ANY article, whether I'm an expert or not. There are some feel-good warning words about not accusing anyone of misconduct, about supporting all comments, etc., but it's not at all obvious that a clever enough person would be caught misleading future readers of any given article. That is, the comment process is too easy.

      I could have easily registered as Steven Chu and commented all I wanted on any random physics paper. Before the damage was discovered, some of those comments could have made it into the press, and Secretary Chu's reputation, the reputation of the authors of those articles, and the reputation of the journal would have been sullied. So, while I applaud PLoS for guaranteeing open access, I have no plans to submit an article to them while they allow anonymous or possibly fraudulent comments to be attached to any article. I don't know how they can fix it while maintaining some modicum of the broad access they desire, but I don't think that what they have necessarily makes the scientific process better.

      Quite simply, science is not democracy. It may sound elitist or rude, but not everyone can do science, and even great scientists can't do much outside of their field. I certainly couldn't honestly comment on a research paper about developmental biology, and I don't want an oncologist or a tinfoil hat person to be able to permanently attach their comments to my articles.

    6. As I was browsing my Google Reader Feeds, I came across an interesting anecdote that to some would indicate what's wrong with the way peer review works but to me is a perfect example of why peer review works well.

      A (peer reviewed) study was published concerning some unusual variations in the level of a lake on the main island of a chain of islands near the Strait of Magellan. These variations were used to study the tidal effects on the island, giving the authors some hint about the lithosphere under the lake.

      A comment (peer reviewed also) was published taking issue with the original study's (OS) claims that the lithosphere could be studied in this manner because of various problems with modeling and assumptions in the OS.

      A reply (peer reviewed of course--replies to comments are commonly published alongside the comments) was published by the authors of the OS. They noted that this comment encouraged them to go back through their work step-by-step and they discovered some inconsistencies in some software tools that introduced some errors. After correcting those errors, they were still comfortable in arguing that they could study the lithosphere beneath the lake.

      I bring this up because it, to me, is a great anecdote about how the peer review system DOES work. Others may view it as a failure of the system because the errors that even the authors didn't see were published, and it took another publication to point out problems with the original study before the errors were found.

      To me, this is exactly the way it's supposed to work. There's the initial editors' decision to accept a submission, then the peer reviews, which filter out most of the junk. However, there will always be disagreements in the literature, so this back-and-forth within the literature is essential and is the opposite of a problem.

      Finally, it points out where libraries are absolutely essential to maintaining a complete narrative. If a researcher were to find only the original study, they'd get incorrect information (yes, it would have been nice for it not to be there in the first place, but imperfection is a fact of life). If they were to go to a reasonably capable library, they would get that study, its references, and its derivatives and would see the discussion and correction. Open Access would make this much easier.

      Sure, there's a time lag between the OS and its derivatives, but as I've argued before, time lag is just a part of science; research takes time.

