Scientists’ (sad) behaviour as seen by the Team Geek book

I’m really enjoying the book Team Geek: A Software Developer’s Guide to Working Well with Others and I strongly recommend it to any software developer out there, but that’s not why I’m mentioning it here.

I’m doing it because it includes a comparison between professional scientists and software developers as a way to convince software developers not to work alone and instead join the open source movement:

Professional science is supposed to be about the free and open exchange of information. But the desperate need to “publish or perish” and to compete for grants has had exactly the opposite effect. Great thinkers don’t share ideas. They cling to them obsessively, do their research in private, hide all mistakes along the path and then ultimately publish a paper making it sound like the whole process was effortless and obvious. And the results are often disastrous: they accidentally duplicated someone else’s work or made an undetected mistake early on … The amount of wasted time and effort is tragic

(and of wasted public money, I’d add)

True, nothing really new here (I already touched on this same topic in the post “Be honest, curing cancer is not your primary goal”), but it surprised me that the same perception was shared by people outside our community. I’d say this is a good thing: the more pressure we have to change the way research is done, the better.


What stats would you like to know for every conference?

In the opening session of any conference, the PC Chairs give a brief presentation of the conference. This typically includes the number of abstracts and full papers submitted, the number of papers accepted, the corresponding acceptance rate and some kind of map / graphic displaying the same information by country.

Usually, that’s it. For the opening session of ECMFA’14 we wanted to give some more (hopefully interesting) data. In the end, due to time constraints, what we gave as additional data was:

  1. Percentage of accepted papers where none of the authors was (or had been in the last four years) a PC member of the conference
  2. Acceptance rate of papers where at least one author was (or had been) a PC member
  3. Number of papers where none of the authors had participated in the community (as a PC member or author) before (again, “before” means in the last four years)

With the first two we wanted to show that you didn’t need to be a PC member to get your paper into the conference (for ECMFA, 42% of papers were from non-PC members) and that having a PC member as co-author did not dramatically increase the probability of getting your paper accepted (the acceptance rate for PC co-authored papers was only 10% higher than for papers with no PC member). The third was a way to see how endogamous the conference was (quite a lot, it turned out, since only one paper had a complete set of “fresh” authors).
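If you have the per-paper data, these three stats are straightforward to compute. Here is a minimal sketch; the paper list, its tuple layout and all the numbers in it are invented for illustration, not the actual ECMFA’14 data:

```python
# Each paper: (accepted?, has a (recent) PC-member author?, any author active
# in the community in the last four years?). All values are made up.
papers = [
    (True,  False, True),
    (True,  True,  True),
    (True,  False, False),  # a paper with only "fresh" authors
    (False, True,  True),
    (False, False, True),
    (False, False, True),
]

accepted = [p for p in papers if p[0]]

# 1. Share of accepted papers with no PC member among the authors
non_pc_share = sum(1 for p in accepted if not p[1]) / len(accepted)

# 2. Acceptance rate of papers with at least one PC-member author
pc_papers = [p for p in papers if p[1]]
pc_rate = sum(1 for p in pc_papers if p[0]) / len(pc_papers)

# 3. Accepted papers where no author had participated in the community before
fresh_papers = sum(1 for p in accepted if not p[2])

print(non_pc_share, pc_rate, fresh_papers)
```

Comparing `pc_rate` with the overall acceptance rate (`len(accepted) / len(papers)`) gives the “does a PC co-author help?” figure mentioned above.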

Would you like all conferences to include these three stats in their presentation? And, regardless of your answer, what other data/statistics would you like to know about a conference?


Real Workshops do NOT publish papers

On Tuesday, I attended this workshop and I really enjoyed it a lot. The difference with other workshops? This workshop was by invitation (open to all the participants in the ASE PC Board meeting taking place the day after), so no call for papers, no publication of any kind of proceedings, no restriction on the topics to talk about, no nothing.

For me, the key aspect was the fact that authors were not there to talk about any specific paper. In general, most workshops work by publishing a call for papers, and you must submit a paper to the workshop in order to be able to give a presentation there. Too often, this results in a workshop full of delta papers (i.e. papers that are just a minor improvement over previous papers) quickly written to justify attending the workshop. The problem is that, then, during the workshop, authors feel obliged to talk about that specific paper instead of taking the opportunity to have a more open discussion, which results in boring and uninteresting presentations.

I’d like to see more workshops going back to their original mission: be a place for discussion and exchange of ideas, instead of becoming mini conferences!


Should you submit to your own workshop? My pragmatic response

A typical question in every workshop I co-organize is whether we (the organizers) are allowed to submit papers to the workshop.

I almost never do it, but I have no problem with other co-organizers doing it (I’m assuming we are talking about real workshops, not conferences disguised as workshops), with just one pragmatic condition: I don’t want this to add any extra work to my organization duties. By this I mean that I’m not going to manage papers from organizers outside EasyChair (or any other conference management system) to ensure the anonymity of the reviewers. This is not only a pain in the ass for me but also for the affected reviewers and the whole PC.

So, do you want to submit a paper to the workshop you’re organizing? Fine with me, but then I’ll just trust that you’ll look the other way when your reviews start coming in.


Great site about spurious correlations

I’m sure any researcher is perfectly aware of the difference between correlation and causality (if not, read this and you’ll understand why banning Internet Explorer is not likely to stop murders in the US).

But the fact that we do know this doesn’t mean we are not tempted to forget it (causal relationships look great in papers!). In those situations, please take a look at the Spurious Correlations site, full of amazing data correlations (e.g. Nicolas Cage appearances in films and people drowning, or consumption of mozzarella cheese and number of civil engineering doctorates) for a reality check!
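A lot of these spurious correlations boil down to two unrelated quantities sharing a common upward trend over time. A minimal sketch with invented yearly counts (these are not the site’s actual numbers) shows how easily that produces a near-perfect Pearson correlation:

```python
# Two invented yearly series that both happen to trend upward.
# No causal link, yet their correlation comes out close to 1.
cage_films = [2, 2, 3, 4, 4, 5, 6]                # hypothetical film counts
drownings  = [98, 102, 109, 115, 117, 124, 131]   # hypothetical drownings

def pearson(xs, ys):
    """Pearson correlation coefficient, computed from the definition."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

print(pearson(cage_films, drownings))
```

Any two monotonically increasing series will score like this, which is exactly why a high correlation coefficient on its own proves nothing about causation.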

Now, jokes aside, make sure young students/researchers take a look at this site and we may avoid quite a few rejected papers in the future.


Calling for a Journal of Negative Results in Software Engineering

We recently completed a research work that ended with a bunch of negative results. Even if negative, we thought the results we obtained were valuable because, in our opinion, they were not obvious (in fact, we wanted to “prove” that the variables we studied were indeed a positive factor).

The next question was: so, where do we try to publish this? In theory, anywhere. I mean, I’ve never seen a conference or journal in Software Engineering explicitly forbidding you from submitting negative results. My problem is that I’ve not seen any that explicitly encourages you to submit them either (I’d love to be wrong, so please correct me if you can!).

There are quite a few journals in other areas specifically devoted to publishing negative results (e.g. in biomedicine, ecology, physics, …) but nothing in Computer Science except for this “Forum for Negative Results” published as a section of the Journal of Universal Computer Science. Nothing for Software Engineering.

That’s why I’m calling here for a Journal of Negative Results in Software Engineering. Anybody else think this would be a good idea?

(I’d also settle for a more explicit encouragement, and acceptance, of negative results in existing conferences/journals; I do believe that many people are afraid of submitting their negative results and we are losing all those findings)


An insider report on the peer-review process for a top Software Engineering conference

Lionel Briand and André van der Hoek (PC Chairs of ICSE 2014; for those working in other research areas, I think it’s safe to say that ICSE is the most well-known research conference on Software Engineering) have published their analysis of the peer-review process for ICSE 2014 in the following report:

Insights and Lessons Learned from Analyzing ICSE 2014 Survey and Review Data

The first paragraph reads as follows: “This document reports on survey and review data collected during and after the ICSE 2014 review process. Its goal is to provide full insights and lessons learned about the reviewing and decision making process that led to the selection of 99 scientific articles (out of 495) for the technical research program.”

Mouthwatering, right? If you’re a researcher (even if in a different area), I’d say this is a unique opportunity (let me know if you know of other similar reports!) to get a better perspective on how PCs make their decisions.
