A little background here: I am co-author of a would-be paper, currently in Peer Review Limbo at a journal which shall remain nameless in order to protect the innocent and comfort the Affleck or however it goes. Reviewer #1 reckons it's a solid paper but wonders why we waste so much space reanalysing the data with these newfangled "Maximum Likelihood" algorithms when everyone is familiar with the tried-and-true least-squares equivalents. Reviewer #2 sees the manuscript as a missed opportunity, concerning itself with clinical aspects of neurotoxicology when it should be exploring the possibilities of Maximum Likelihood methods at more length, e.g. the chance to calculate confidence contours around the values of parameters using Akaike's Information Criterion, and to compare nested models with differing degrees of freedom.
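For readers wondering what Reviewer #2 is on about: the nested-model comparison, at least, is easy to sketch. Below is a minimal, hypothetical illustration (synthetic data, invented model names, Gaussian errors assumed throughout) of fitting two nested models by maximum likelihood and ranking them with AIC; it is not the analysis from the actual manuscript.

```python
# Hypothetical sketch: compare two nested models by maximum likelihood + AIC.
# Data are synthetic; nothing here comes from the paper under review.
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(42)
x = np.linspace(0, 10, 50)
y = 2.0 + 0.7 * x + rng.normal(0, 1.0, size=x.size)  # fake "dose-response" data

def neg_log_likelihood(params, x, y, model):
    """Gaussian negative log-likelihood; the last parameter is the noise sigma."""
    *theta, sigma = params
    if sigma <= 0:
        return np.inf
    resid = y - model(x, theta)
    return 0.5 * np.sum(np.log(2 * np.pi * sigma**2) + (resid / sigma) ** 2)

# Nested pair: the constant model is the linear model with its slope fixed at 0.
linear = lambda x, th: th[0] + th[1] * x
constant = lambda x, th: np.full_like(x, th[0])

def fit(model, n_params):
    start = np.ones(n_params + 1)          # model parameters plus sigma
    res = minimize(neg_log_likelihood, start, args=(x, y, model),
                   method="Nelder-Mead")
    k = n_params + 1                        # count sigma as a fitted parameter
    aic = 2 * k + 2 * res.fun               # AIC = 2k - 2 ln(L)
    return res, aic

res_lin, aic_lin = fit(linear, 2)
res_const, aic_const = fit(constant, 1)
print(f"AIC linear:   {aic_lin:.1f}")
print(f"AIC constant: {aic_const:.1f}")
# The lower AIC wins; with these data the extra slope parameter earns its keep.
```

The point Reviewer #2 is presumably making: once you have the likelihood itself, comparisons like this (and profile-based confidence regions) come almost for free, which plain least squares does not give you.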
To count our blessings, at least there is no Reviewer #3. IT'S ALWAYS THE THIRD GODDAMN REVIEWER THAT SCREWS US OVER!
The slow, soul-destroying cycle of revise and resubmit, revise and resubmit is probably inevitable. For a journal specialising in applications, the manuscript is top-heavy with abstruse questions of modelling; for a journal about general methodology, it's overloaded with tedious clinical details.
"We are falling between two stools," my co-author lamented in an e-mail.
"Even worse," I wrote back, confident in the knowledge that she cannot thump me all the way from Liverpool. "We are stalling between two fools."