WordPress research challenge – Show the world your research is relevant

Read my request for help to the members of the software research community: let's show the huge community of WordPress users that our techniques/tools can actually be useful to improve any aspect of the WordPress project.

This is what I'll try to “sell” in my upcoming talk at WordCamp Europe, where my goal is to run this little experiment of bringing our research work to a large community of practitioners and seeing whether we can understand each other better.

Posted in industry relations, tools

MetaScience – A tool to analyze research conferences

Announcing the release of our new MetaScience online service, which we have developed to help with the critical matter of evaluating the quality of conferences using metrics that are not usually available (unless you take the time to calculate them yourself).

The current version relies on the database provided by DBLP to derive some useful metrics for conferences and workshops. These metrics show:

  • Conference activity. It provides the overall number of authors and papers for each conference edition.

  • Conference ratios. It presents the number of authors per paper and papers per author for each edition.

  • Community turnover. Following the popular expression “publish or perish”, it calculates the percentage of authors that survived/perished between editions of the conference. The user can select whether the unit of time spans two or three consecutive editions.

  • Openness. It measures how open the community underlying a conference is towards newcomers. For each edition it presents the ratio between papers coming from authors that have never published in the conference before (outsiders) and papers whose authors have all published there already (community members).
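
The turnover and openness metrics above can be sketched in a few lines of code. This is a minimal illustration, not MetaScience's actual implementation (which derives everything from DBLP data): the data structures and function names here are hypothetical, with each edition represented as a list of papers and each paper as a set of author names.

```python
# Sketch of the turnover and openness metrics described above.
# Hypothetical representation: each edition is a list of papers,
# and each paper is a set of author names.

def turnover(prev_authors, curr_authors):
    """Fraction of the previous edition's authors who did NOT return (perished)."""
    if not prev_authors:
        return 0.0
    perished = prev_authors - curr_authors
    return len(perished) / len(prev_authors)

def openness(past_authors, papers):
    """Fraction of an edition's papers written entirely by newcomers (outsiders)."""
    outsider_papers = [p for p in papers if not (p & past_authors)]
    return len(outsider_papers) / len(papers)

# Toy example: three editions of a fictional conference.
editions = [
    [{"ana", "bob"}, {"carl"}],            # edition 1
    [{"ana", "dan"}, {"eve"}],             # edition 2
    [{"eve", "frank"}, {"gina", "hugo"}],  # edition 3
]

authors_per_edition = [set().union(*ed) for ed in editions]
print(turnover(authors_per_edition[0], authors_per_edition[1]))  # 2/3: bob and carl perished
past = authors_per_edition[0] | authors_per_edition[1]
print(openness(past, editions[2]))  # 0.5: only the {gina, hugo} paper is all-newcomer
```

Note the design choice in `openness`: a paper counts as an outsider paper only when *none* of its authors has published in the conference before, matching the all-or-nothing split described in the bullet above.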


The service is still under development and we are currently working on many other visualizations/metrics. Feel free to have your say by participating in our GitHub repository.

Posted in evaluating research, tools

Negative Results in Empirical Software Engineering – EMSE Special Issue

Some time ago, we discussed the need for a Journal of Negative Results in Software Engineering. Well, today we're not yet announcing the creation of such a journal, but at least a first step towards increasing awareness of the importance of publishing negative results: we are going to publish a Special Issue on “Negative Results in Empirical SE” in the Empirical Software Engineering journal.

More details about the call for papers

Posted in Uncategorized

RR: Shit my reviewers say

Very funny (and sad at the same time) collection of reviewers' comments on research papers at http://shitmyreviewerssay.tumblr.com/ (also on Twitter at yourpapersucks).

The comments mix criticisms of what are probably really bad papers

The best thing about the paper in its current form is that is that it is [sic] short, so I did not waste a lot of time reading it;

This kind of prose simply borders on cruelty against the reader. And finally comes the conclusion, which is the intellectual equivalent of bubblegum.

Did all 5 authors say, “Yes, this is a piece of work I am proud to have my name on?”

together with comments that don't speak well of the quality of the reviewers themselves

The orgnization and writing of the paper need to improve. There are some grammar errors need to correct.

Can you explain this part a bit further, but without going into detail.

The reported mean of 7.7 is misleading because it appears that close to half of your participants are scoring below that mean

This paper reads like a woman’s diary, not like a scientific piece of work

I don’t believe in simulations

and a few desperate cries for help from editors dealing with those reviewers

You will see that Reviewer 2 has slightly missed the point, so please don’t pay too much attention to their comments in your revision.

Feel free to share your worst reviews! (and submit them to this site if you wish)

Posted in evaluating research, funny

Conference websites do not need to be boring

And the website for JSConf Latin America is the best example I've seen of that so far:

JS Conf Latin America

Posted in organization

An email I got with all the wrong reasons to publish a paper

This didn't happen to a friend (or a friend of a friend…). This happened to me. I'm just removing personal details of the person who sent the email (“hate the deed, not the doer”, or whatever works as the English translation of the Spanish saying “se dice el pecado pero no el pecador”).

Some time ago I was invited to co-author a research paper based on my expertise in ATL (the paper was about transforming from one modeling language to UML). I didn't know the researcher who invited me, but a quick look at his DBLP page showed a respectable publication record, and there was already another co-author whom I did know and who was well-known in the community. So at that point I just asked him to send me a draft of the paper to get a better idea of what they were trying to do and evaluate my possible contribution.

Once I read the draft I got back to him expressing two serious concerns about the paper. In short:

  1. I couldn't see why anybody would need that transformation.
  2. The transformation itself was straightforward (basically everything was a 1-to-1 mapping), so by itself it wasn't a contribution either.

This is the answer I got (minimally edited to preserve the anonymity of all people involved). Judge for yourself what this says about how some researchers behave (and, to be clear, I don't want to put all the blame on that specific person; I do believe that the publish-or-perish philosophy imposed by many evaluation agencies is to blame as well).

First regarding my concerns:

Yes, the case studies aren’t real. I have talked to Dr. X and Dr. Y who have great expertise in the area and I found out that Z is not really widespread in industry and therefore industrial case studies are almost impossible to come by. Unfortunately, this piece of information can a little too late.
Yes, the transformation is too straightforward. Not only that, but we are actually abstracting the model because Z are semantically richer UML activity diagrams.

And his reaction to them:

Our aspirations need to curbed. This will not turn into an IEEE TSE, TOSEm or even IST paper. It may fly in other lower level journals. I am thinking JSS or Sosym (or even lower), solely for the fact that these journals advocate modeling for the sake of modeling (especially Sosym which is purely a modeling journal).

So basically, my concerns are true, but they are not a problem per se, only a problem in the sense that they may prevent the paper from being published in top journals. And now his proposal:

Given the drawbacks of the paper, a great deal of thinking needs to be put in deciding which journal to submit too. The choice of the journal will greatly affect the chances of success. The strategy is to beef up as many sections of the paper as possible. The goal here is to blow away the reviewers via complexity. This may work for Sosym because the reviewers will not care much for the motivation nor will they dwell over the non-real-world case studies. I have published 3 papers in Sosym that had a similar situation, including two papers with validation sections comprising of only exemplars. We can also use the complexity to target softer journals (i.e. information systems journals). IS journal reviewers I am sure will be taken by the complexity but they may be smart enough to see a lack of motivation.

In short, let's try to fool the reviewers into believing we actually have something in our hands! Love his consideration for IS reviewers, who could be smart enough to catch our lies! Needless to say, I kindly declined to participate in the paper, but I can't help feeling angry about the situation. These are tough times to get research funding, and this kind of behaviour is screwing us up even more.

Anyone with similar personal experiences?

Posted in funny, publishing

Where am I supposed to submit complete research works?

The other day we tried to submit a journal paper summarizing the PhD work of one of my students. He had published a couple of conference papers and one workshop paper covering specific parts of his work and, after the PhD, he wanted to write a journal paper presenting the complete method. This is, I would say, a typical publication pattern.

The problem came when we tried to submit this paper to a journal. The paper was long: to be precise, 16,000 words long. This didn't seem like a problem to me. This was the complete overview of a 3-year work period and, IMHO, it deserved this space in order to provide enough details of all the components and the relationships between them (remember that a typical 15-page LNCS paper is around 5,000 words, so we're talking about a paper just triple that size).

But, as usual, I was wrong. Journals do not seem interested in publishing high-quality works regardless of their size (not saying that mine was a high-quality work, but they don't know either; they didn't take the time to check). Journals just want to publish as many papers as possible. The editor-in-chief immediately replied:

The paper comes across as very long. This means that it will be difficult to find reviewers for the paper. Furthermore, the contribution of the paper will be judged in relation to its length. I would strongly suggest that you try to shorten the paper.

In a subsequent email, the editor clarified that by shortening he meant getting to no more than 11,000 words, so basically he was asking us to remove one third of the paper.

And it turns out this is a common requirement. The ACM TOSEM journal says:

Extremely long submissions — as a general rule, those that exceed approximately 11,000 words — may be returned without review at the discretion of the editor-in-chief. If placed into the review process, such submissions are not guaranteed review or publication in a timely fashion.

Since when is an 11,000-word paper an extremely long submission? Researchers always say that it's bad practice to publish only small increments over previously published works, but journals are forcing us to do exactly that and stick to the minimum 30% novelty rule. Especially now that conference papers are following the complete opposite path and getting larger and larger (many conferences now accept papers of up to 18-20 pages in LNCS format).

And yes, there is an answer to my initial question. I could publish the complete research work in an open repository like arXiv but, unfortunately, that’s not a valid option for my student.

(BTW, if you wondered what happened to the paper: we managed to reduce it to 12,500 words. It's now a worse paper but hopefully still good enough to be published, and at least this time the editor sent it out for review.)

Posted in publishing