How will you aggregate the preferences?
We are working with experts in voting theory to identify and customize an aggregation mechanism (referred to as a social choice function or welfare function in the literature) fit for the needs of . While our survey may look like a relatively classic example of “positional voting”, which has a long history of analysis in the voting literature, the aggregation problem we deal with has some unique features (notably, that each voter compares a different set of options).
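To illustrate the difficulty, here is a minimal sketch of positional voting when each voter ranks a different subset of options. This is not the mechanism we will actually use; it is a toy example (averaged, length-normalized Borda scores, with hypothetical journal names) showing how partial ballots can still be aggregated:

```python
from collections import defaultdict

def positional_scores(ballots):
    """Aggregate rankings where each voter ranks a different subset of options.

    Each ballot is a list of options ordered best-to-worst. Each option gets
    normalized Borda points (1.0 for top, 0.0 for bottom of that ballot),
    averaged over the voters who actually ranked it.
    """
    totals = defaultdict(float)
    counts = defaultdict(int)
    for ballot in ballots:
        n = len(ballot)
        for pos, option in enumerate(ballot):
            # A one-option ballot carries no comparison; give a neutral 1.0.
            score = 1.0 if n == 1 else (n - 1 - pos) / (n - 1)
            totals[option] += score
            counts[option] += 1
    return {opt: totals[opt] / counts[opt] for opt in totals}

# Three voters, each comparing a different set of (hypothetical) journals.
ballots = [
    ["Journal A", "Journal B", "Journal C"],
    ["Journal B", "Journal A"],
    ["Journal C", "Journal B", "Journal D"],
]
scores = positional_scores(ballots)
```

Even this toy version shows why the setting is non-classic: an option's score depends on which voters happened to see it, so the normalization choice matters.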
But the Condorcet paradox states…

Sure, it is a classic, and fascinating, impossibility result in voting theory. However, owing to the large number of respondents, its practical implications for our application are limited. Should we find that collective preferences occasionally feature cycles, that would actually be an interesting scientific result in its own right. That said, voting systems have already been described, and in some cases used in the real world, that explicitly tackle and resolve the problem of cycles.
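For readers unfamiliar with the paradox, here is a small illustrative sketch (not part of our actual pipeline) that builds the pairwise majority relation from ballots and checks for a preference cycle; the three-ballot profile at the bottom is the textbook paradox, where the collective preference cycles A over B over C over A:

```python
from itertools import combinations, permutations

def majority_edges(ballots):
    """Pairwise majority relation: (x, y) is an edge when a strict majority
    of the voters who ranked both options prefer x to y."""
    wins = {}
    options = sorted({o for b in ballots for o in b})
    for x, y in combinations(options, 2):
        x_over_y = y_over_x = 0
        for b in ballots:
            if x in b and y in b:
                if b.index(x) < b.index(y):
                    x_over_y += 1
                else:
                    y_over_x += 1
        if x_over_y > y_over_x:
            wins[(x, y)] = True
        elif y_over_x > x_over_y:
            wins[(y, x)] = True
    return wins

def has_condorcet_cycle(ballots):
    """True if some triple cycles: x beats y, y beats z, z beats x."""
    wins = majority_edges(ballots)
    options = {o for b in ballots for o in b}
    for x, y, z in permutations(options, 3):
        if (x, y) in wins and (y, z) in wins and (z, x) in wins:
            return True
    return False

# Classic Condorcet paradox profile: each option beats one other by 2-1.
cyclic = [["A", "B", "C"], ["B", "C", "A"], ["C", "A", "B"]]
```

With many respondents, such perfectly balanced profiles become unlikely, which is why the paradox has limited practical bite here.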
Who has time for yet another survey?
It is true: as researchers (and citizens in general), we are asked to participate in more and more surveys. While we think our survey will be particularly interesting, and quick, to take, we ultimately do not expect researchers to take it because “it is fun”. Rather, we think that many researchers are genuinely interested in helping shape, with their vote, a much-needed tool in the world of scientific publishing. Researchers spend hours, or even days, each month doing unpaid peer review of manuscripts for journals. We ask them to spend five minutes evaluating the journals themselves, knowing that their opinion matters.
What scientific domains does cover?
Every aspect of the survey and of the aggregation procedure was designed, in interaction with colleagues from other disciplines and from industry, to be potentially applicable to any scientific discipline. The only likely exception relates not to the scientific topic, but rather to the way the scientific endeavor is organized: in fields with very large collaborations (an extreme example being articles in high energy physics with hundreds, or thousands, of authors), the assumption that any researcher knows, at least to some extent, the quality of the journals they publish in might be shaky. More generally, our approach will only provide meaningful results in disciplines where the survey gathers some minimum critical mass of respondents; that said, even just a few dozen might be enough to compare a set of close-knit journals.
A more practical limitation is that, as of now, may not be able to cover journals without an ISSN, or journals not included in the main bibliographic databases (Scopus/Web of Science/OpenAlex).