Negative Results in Empirical Software Engineering – EMSE Special Issue

Some time ago, we discussed the need for a Journal of Negative Results in Software Engineering. Well, today we're not yet announcing the creation of such a journal, but at least a first step towards increasing awareness of the importance of publishing negative results: we are going to publish a Special Issue on “Negative Results in Empirical SE” in the Empirical Software Engineering journal.

More details about the call for papers

Posted in Uncategorized | Leave a comment

RR: Shit my reviewers say

A very funny (and sad at the same time) collection of reviewers' comments on research papers (also on Twitter at yourpapersucks).

The comments mix criticisms of what are probably really bad papers:

The best thing about the paper in its current form is that is that it is [sic] short, so I did not waste a lot of time reading it;

This kind of prose simply borders on cruelty against the reader. And finally comes the conclusion, which is the intellectual equivalent of bubblegum.

Did all 5 authors say, “Yes, this is a piece of work I am proud to have my name on?”

together with comments that don't say much good about the quality of the reviewers themselves:

The orgnization and writing of the paper need to improve. There are some grammar errors need to correct.

Can you explain this part a bit further, but without going into detail.

The reported mean of 7.7 is misleading because it appears that close to half of your participants are scoring below that mean

This paper reads like a woman’s diary, not like a scientific piece of work

I don’t believe in simulations

and a few desperate cries for help from editors dealing with those reviewers:

You will see that Reviewer 2 has slightly missed the point, so please don’t pay too much attention to their comments in your revision.

Feel free to share your worst reviews! (and submit them to this site if you wish)

Posted in evaluating research, funny | Leave a comment

Conference websites do not need to be boring

And the website for JSConf Latin America is the best example I’ve seen of that so far:

JS Conf Latin America

Posted in organization | Leave a comment

An email I got with all the wrong reasons to publish a paper

This didn’t happen to a friend (or a friend of a friend…). This happened to me. I’m just removing the personal details of the person who sent the email (“hate the deed, not the doer” or whatever works as the English translation of the Spanish saying “se dice el pecado pero no el pecador”).

Some time ago I was invited to co-author a research paper based on my expertise in ATL (the paper was about transforming from one modeling language to UML). I didn’t know the researcher who invited me, but a quick look at his DBLP page showed a respectable publication record, and there was already another co-author whom I did know and who was well-known in the community. So at that point I just asked him to send me a draft of the paper to get a better idea of what they were trying to do and to evaluate my possible contribution.

Once I read the draft I got back to him expressing two serious concerns about the paper. In short:

  1. I couldn’t see why anybody would need that transformation.
  2. The transformation itself was straightforward (basically everything was a 1-to-1 mapping), so it wasn’t a contribution by itself either.

This is the answer I got (minimally edited to preserve the anonymity of all people involved). Judge for yourself what this says about how some researchers behave (and, to be clear, I don’t want to put all the blame on that specific person; I do believe that the publish-or-perish philosophy imposed by many evaluation agencies is to blame as well).

First regarding my concerns:

Yes, the case studies aren’t real. I have talked to Dr. X and Dr. Y, who have great expertise in the area, and I found out that Z is not really widespread in industry and therefore industrial case studies are almost impossible to come by. Unfortunately, this piece of information came a little too late.
Yes, the transformation is too straightforward. Not only that, but we are actually abstracting the model because Z are semantically richer than UML activity diagrams.

And his reaction to them:

Our aspirations need to be curbed. This will not turn into an IEEE TSE, TOSEM or even IST paper. It may fly in other lower-level journals. I am thinking JSS or Sosym (or even lower), solely for the fact that these journals advocate modeling for the sake of modeling (especially Sosym, which is purely a modeling journal).

So basically, my concerns are true, but they are not a problem per se; they are only a problem in the sense that they may prevent the paper from being published in top journals. And now his proposal:

Given the drawbacks of the paper, a great deal of thinking needs to be put in deciding which journal to submit to. The choice of the journal will greatly affect the chances of success. The strategy is to beef up as many sections of the paper as possible. The goal here is to blow away the reviewers via complexity. This may work for Sosym because the reviewers will not care much for the motivation nor will they dwell over the non-real-world case studies. I have published 3 papers in Sosym that had a similar situation, including two papers with validation sections comprising only exemplars. We can also use the complexity to target softer journals (i.e. information systems journals). IS journal reviewers I am sure will be taken by the complexity but they may be smart enough to see a lack of motivation.

In short, let’s try to fool the reviewers into believing we actually have something in our hands! Love his consideration for IS reviewers, who could be smart enough to catch our lies! Needless to say, I kindly declined to participate in the paper, but I can’t help feeling angry about the situation. These are tough times to get research funding, and this kind of behaviour is screwing us up even more.

Anyone with similar personal experiences?

Posted in funny, publishing | Leave a comment

Where am I supposed to submit complete research works?

The other day we tried to submit a journal paper summarizing the PhD work of one of my students. He had published a couple of conference papers and one workshop paper covering specific parts of his work and, after the PhD, he wanted to write a journal paper presenting the complete method. This is, I would say, a typical publication pattern.

The problem came when we tried to submit this paper to a journal. The paper was long: 16,000 words, to be precise. This didn't seem like a problem to me. It was the complete overview of a 3-year work period and, IMHO, deserved this space in order to provide enough detail about all the components and the relationships between them (remember that a typical 15-page LNCS paper is around 5,000 words, so we're talking about a paper just triple that size).

But, as usual, I was wrong. Journals do not seem interested in publishing high-quality works regardless of their size (not saying that mine was a high-quality work, but they don't know that either; they didn't take the time to check). Journals just want to publish as many papers as possible. The editor-in-chief immediately replied:

The paper comes across as very long. This means that it will be difficult to find reviewers for the paper. Furthermore, the contribution of the paper will be judged in relation to its length. I would strongly suggest that you try to shorten the paper.

In a follow-up email, the editor clarified that by shorten he meant no more than 11,000 words, so basically he was asking us to remove one third of the paper.

And it turns out this is a common requirement. The ACM TOSEM journal says:

Extremely long submissions — as a general rule, those that exceed approximately 11,000 words — may be returned without review at the discretion of the editor-in-chief. If placed into the review process, such submissions are not guaranteed review or publication in a timely fashion.

Since when is an 11,000-word paper an extremely long submission? Researchers always say that it's bad practice to publish only small increments over previously published works, but journals are forcing us to do exactly that and stick to the minimum-30%-novelty rule. Especially now that conference papers are following the complete opposite path and getting larger and larger (many conferences now accept papers of up to 18-20 pages in LNCS format).

And yes, there is an answer to my initial question. I could publish the complete research work in an open repository like arXiv but, unfortunately, that’s not a valid option for my student.

(BTW, if you wondered what happened with the paper: we managed to reduce it to 12,500 words. It's now a worse paper, but hopefully still good enough to be published, and at least this time the editor sent it out for review.)

Posted in publishing | Leave a comment

Scientists’ (sad) behaviour as seen by Team Geek book

I’m really enjoying the book Team Geek: A Software Developer’s Guide to Working Well with Others and I strongly recommend it to any software developer out there, but that's not why I'm mentioning it here.

I’m doing it because it includes a comparison between professional scientists and software developers as a way to convince software developers not to work alone and instead join the open source movement:

Professional science is supposed to be about the free and open exchange of information. But the desperate need to “publish or perish” and to compete for grants has had exactly the opposite effect. Great thinkers don't share ideas. They cling to them obsessively, do their research in private, hide all mistakes along the path, and then ultimately publish a paper making it sound like the whole process was effortless and obvious. And the results are often disastrous: they accidentally duplicated someone else's work or made an undetected mistake early on … The amount of wasted time and effort is tragic.

(and of wasted public money, I'd add)

True, nothing really new here (I already touched on this same topic in the post “Be honest, curing cancer is not your primary goal”), but it surprised me that the same perception is shared by people outside our community. I’d say this is a good thing: the more pressure we have to change the way research is done, the better.

Posted in doing research, publishing | Leave a comment

What stats would you like to know for every conference?

In the opening session of any conference, the PC chairs give a brief presentation of the conference. This typically includes the number of abstracts and full papers submitted, the number of papers accepted, the corresponding acceptance rate, and some kind of map/graphic displaying the same information by country.

Usually, that’s it. For the opening session of ECMFA’14 we wanted to give some more (hopefully interesting) data. In the end, due to time constraints, the additional data we gave was:

  1. The percentage of accepted papers where none of the authors was (or had been in the last four years) a PC member of the conference
  2. The acceptance rate of papers where at least one author was (or had been) a PC member
  3. The number of papers where none of the authors had participated in the community (as a PC member or author) before (again, “before” means in the last four years)

With the first two we wanted to show that you didn’t need to be a PC member to get your paper into the conference (for ECMFA, 42% of accepted papers were from non-PC members) and that having a PC member as co-author did not dramatically increase the probability of getting your paper accepted (the acceptance rate for PC-co-authored papers was only 10% higher than for papers with no PC member). The third was a way to see how endogamous the conference was (quite a lot, it turned out, since only one paper had a completely “fresh” set of authors).
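For chairs who want to report the same numbers, the three stats above are easy to compute once you have the submission records. Here is a minimal sketch; the `Submission` data model and the function name are my own assumptions, not any conference system's API:

```python
from dataclasses import dataclass

@dataclass
class Submission:
    authors: set   # author identifiers for this submission
    accepted: bool # outcome of the review process

def conference_stats(submissions, pc_members, past_authors):
    """pc_members: people on the PC now or in the last four years.
    past_authors: people who authored a paper in the last four years."""
    accepted = [s for s in submissions if s.accepted]

    # 1. Share of accepted papers with no (recent) PC member among the authors.
    non_pc = [s for s in accepted if not (s.authors & pc_members)]

    # 2. Acceptance rate of papers with at least one PC co-author.
    pc_subs = [s for s in submissions if s.authors & pc_members]
    pc_rate = sum(s.accepted for s in pc_subs) / len(pc_subs) if pc_subs else 0.0

    # 3. Accepted papers whose authors are all new to the community.
    community = pc_members | past_authors
    fresh = [s for s in accepted if not (s.authors & community)]

    return {
        "non_pc_share": len(non_pc) / len(accepted) if accepted else 0.0,
        "pc_acceptance_rate": pc_rate,
        "fresh_papers": len(fresh),
    }
```

The only design choice worth noting is treating "PC member" and "community member" as sets and using set intersection, which makes the "at least one author" and "none of the authors" conditions one-liners.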

Would you like all conferences to include these three stats in their presentations? And, regardless of your answer, what other data/statistics would you like to know about a conference?

Posted in organization, publishing | 15 Comments