On Tuesday, I attended this workshop and really enjoyed it. The difference from other workshops? This one was by invitation (open to all the participants in the ASE PC Board meeting taking place the day after), so no call for papers, no published proceedings of any kind, no restrictions on the topics to talk about, no nothing.
For me, the key aspect was that authors were not there to talk about any specific paper. Most workshops publish a call for papers, and you must submit a paper in order to give a presentation. Too often, this results in a workshop full of delta papers (i.e. papers that are just a minor improvement with respect to previous papers) quickly written to justify attending the workshop. The problem is that, during the workshop, authors then feel obliged to talk about that specific paper instead of taking the opportunity to have a more open discussion, which results in boring presentations.
I’d like to see more workshops going back to their original mission: being a place for discussion and the exchange of ideas, instead of becoming mini-conferences!
A typical question in every workshop I co-organize is whether we (the organizers) are allowed to submit papers to the workshop.
I almost never do it, but I have no problem with other co-organizers doing it (I’m assuming we are talking about real workshops, not conferences disguised as workshops), with just one pragmatic condition: it must not add any extra work to my organization duties. By this I mean that I’m not going to manage papers from organizers outside EasyChair (or any other conference management system) to ensure the anonymity of the reviewers. This is a pain in the ass not only for me but also for the affected reviewers and the whole PC.
So, do you want to submit a paper to the workshop you’re organizing? Fine with me, but then I’ll just trust that you’ll look the other way when your reviews start coming in.
I’m sure any researcher is perfectly aware of the difference between correlation and causality (if not, read this and you’ll understand why banning Internet Explorer is not likely to stop murders in the US).
But the fact that we know this doesn’t mean we are not tempted to forget it (causal relationships look great in papers!). In those situations, please take a look at the spurious correlations site, full of amazing data correlations (e.g. Nicolas Cage appearances in films and people drowning, or consumption of mozzarella cheese and number of civil engineering doctorates) for a reality check!
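To see how easy it is to manufacture an impressive coefficient, here is a minimal sketch that computes Pearson’s r for two series with no causal link whatsoever. The numbers are invented for illustration (they are not the site’s actual data, just values made up to correlate):

```python
# Invented yearly counts in the spirit of the spurious-correlations site
# (NOT the site's actual data; the numbers are made up to correlate).
cage_films = [2, 2, 2, 3, 1, 1, 2, 3, 4, 1, 4]
drownings = [105, 107, 104, 115, 97, 99, 106, 113, 123, 98, 121]

def pearson(xs, ys):
    """Pearson correlation coefficient between two equal-length series."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

r = pearson(cage_films, drownings)
print(f"r = {r:.2f}")  # a strong correlation, yet no causation in sight
```

A coefficient this high proves nothing by itself: with enough unrelated time series to pick from, some pair will always correlate by chance.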
Now, jokes aside, make sure young students/researchers take a look at this site and we may avoid quite a few rejected papers in the future.
We recently completed a research project that ended with a bunch of negative results. Even if negative, we thought the results we obtained were valuable because, in our opinion, they were not obvious (in fact, we wanted to “prove” that the variables we studied were indeed a positive factor).
The next question was: where do we try to publish this? In theory, anywhere. I mean, I’ve never seen a conference or journal in Software Engineering explicitly forbidding you from submitting negative results. My problem is that I’ve not seen any that explicitly encourages you to submit them either (I’d love to be wrong, so please correct me if you can!).
There are quite a few journals in other areas specifically devoted to publishing negative results (e.g. in BioMedicine, Ecology, Physics,…) but nothing in Computer Science except for this “Forum for Negative Results” published as a section of the Journal of Universal Computer Science. Nothing for Software Engineering.
That’s why I’m calling here for a Journal of Negative Results in Software Engineering. Does anybody else think this would be a good idea?
(I’d also settle for a more explicit encouragement, and acceptance, of negative results in existing conferences/journals; I do believe that many people are afraid of submitting their negative results and we are losing all those findings.)
Lionel Briand and André van der Hoek (PC Chairs of ICSE 2014; for those working in other research areas, I think it’s safe to say that ICSE is the most well-known research conference on Software Engineering) have published their analysis of the peer-review process for ICSE 2014 in the following report:
Insights and Lessons Learned from Analyzing ICSE 2014 Survey and Review Data
The first paragraph reads as follows: “This document reports on survey and review data collected during and after the ICSE 2014 review process. Its goal is to provide full insights and lessons learned about the reviewing and decision making process that led to the selection of 99 scientific articles (out of 495) for the technical research program.”
Mouthwatering, right? If you’re a researcher (even in a different area), I’d say this is a unique opportunity (let me know if you know of other similar reports!) to get a better perspective on how PCs make their decisions.
I’ve used EasyChair in all possible roles (author, reviewer, chair, proceedings manager,…) and CyberChair as author and PC member. Based on this, I have no doubt: I’d choose EasyChair every time. Everything from the login process (a single account for all your conferences in EasyChair versus a different account and login/password combination every time in CyberChair) to the bidding, reviewing, and discussion phases is much more intuitive (to me) in EasyChair. Moreover, EasyChair is completely free (CyberChair, not so much).
Still, many excellent conferences (MoDELS, ASE, ICSE, CAiSE,…) in my field use CyberChair so I may well be missing something. So, dear lazy web, my honest question to you is: “Can you give some reasons to choose CyberChair over EasyChair?”. Appreciated.
I took the time to count how many new reviews for the ECMFA’14 conference were uploaded each day to the conference’s EasyChair account.
The results, displayed below, are exactly what I was expecting (it’s also my own behaviour :-) ). Even though the reviewers had one full month to complete the reviews, 80% of them came in during the last week (day 30 in the graphic was the deadline; as part of the last week I also count late reviews that arrived on days 31 and 32).
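If you want to run the same tally on your own conference, a few lines of Python are enough. The upload days below are invented to match the 80% figure; the real data would come from your conference management system’s logs:

```python
from collections import Counter

# Invented upload day for each review (day 30 = deadline; days 31 and 32
# are late reviews). Replace with the real days from your system's logs.
upload_days = [3, 8, 14, 19, 24, 25, 25, 26, 27, 27,
               28, 28, 29, 29, 29, 30, 30, 30, 31, 32]

per_day = Counter(upload_days)  # reviews uploaded on each day

# "Last week" = days 24-30 plus the late reviews (days 31 and 32).
last_week = sum(n for day, n in per_day.items() if day >= 24)
share = last_week / len(upload_days)
print(f"{share:.0%} of the reviews arrived during the last week")
# prints "80% of the reviews arrived during the last week"
```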
Even if this data comes from a single conference (hey, this is a rant blog post, not a scientific paper!), I’m sure you share the feeling: no matter how much time you give to the reviewers, most of them will always do the reviews at the last minute. If so, why do we need to give so much time to review (conference) papers? We could have a quicker turnaround (which should be one of the main benefits of sending a paper to a conference) if we drastically cut the reviewing period to just two weeks.
Based on our collective behaviour (there are always so many “urgent” things to do that we don’t plan, we just react, so until we start getting warnings about upcoming deadlines we don’t put reviews at the top of our to-do list), I don’t think the quality of the reviews would be worse than what we have now, and authors would get their notifications earlier.
What do you think?