This didn’t happen to a friend (or a friend of a friend…). This happened to me. I’m just removing personal details of the person who sent the email (“hate the deed, not the doer” or whatever works as the English translation of the Spanish saying “se dice el pecado pero no el pecador”).
Some time ago I was invited to co-author a research paper based on my expertise on ATL (the paper was about transforming from one modeling language to UML). I didn’t know the researcher who invited me, but a quick look at his DBLP showed a respectable publication record, and there was already another co-author whom I did know and who was well-known in the community. So at that point I just asked him to send me a draft of the paper to get a better idea of what they were trying to do and evaluate my possible contribution.
Once I read the draft, I got back to him expressing two serious concerns about the paper. In short:
- I couldn’t see why anybody would need that transformation.
- The transformation itself was straightforward (basically everything was a 1-to-1 mapping), so by itself it wasn’t a contribution either.
This is the answer I got (minimally edited to preserve the anonymity of all people involved). Judge for yourself what this says about how some researchers behave (and, to be clear, I don’t want to put all the blame on that specific person; I do believe that the publish-or-perish philosophy imposed by many evaluation agencies is to blame as well).
First, regarding my concerns:
Yes, the case studies aren’t real. I have talked to Dr. X and Dr. Y, who have great expertise in the area, and I found out that Z is not really widespread in industry and therefore industrial case studies are almost impossible to come by. Unfortunately, this piece of information came a little too late.
Yes, the transformation is too straightforward. Not only that, but we are actually abstracting the model, because Z is semantically richer than UML activity diagrams.
And his reaction to them:
Our aspirations need to be curbed. This will not turn into an IEEE TSE, TOSEM or even IST paper. It may fly in other lower-level journals. I am thinking JSS or SoSyM (or even lower), solely for the fact that these journals advocate modeling for the sake of modeling (especially SoSyM, which is purely a modeling journal).
So basically, my concerns are true, but they are not a problem per se; they are only a problem in the sense that they may prevent the paper from being published in top journals. And now his proposal:
Given the drawbacks of the paper, a great deal of thinking needs to be put into deciding which journal to submit to. The choice of journal will greatly affect the chances of success. The strategy is to beef up as many sections of the paper as possible. The goal here is to blow away the reviewers via complexity. This may work for SoSyM because the reviewers will not care much for the motivation, nor will they dwell on the non-real-world case studies. I have published 3 papers in SoSyM that had a similar situation, including two papers with validation sections comprising only exemplars. We can also use the complexity to target softer journals (i.e. information systems journals). IS journal reviewers, I am sure, will be taken by the complexity, but they may be smart enough to see the lack of motivation.
In short, let’s try to fool the reviewers into believing we actually have something in our hands! Love his consideration for IS reviewers, who could be smart enough to catch our lies! Needless to say, I kindly declined to participate in the paper, but I can’t help feeling angry about the situation. These are tough times to get research funding, and this kind of behaviour is screwing us up even more.
Anyone with similar personal experiences?