Nice try. You don’t get to decline 🙂


Please download the manuscript here.

Below are the review instructions taken directly from the eLife website. Please follow the instructions carefully and submit your review here.


1. The Review Process

eLife’s editorial process produces an assessment by peers designed to be posted alongside the preprint for the benefit of readers, and provides detailed feedback to help authors revise and improve their manuscript.

We also use these reviews to select for inclusion in eLife those papers that we judge to be methodologically and scientifically rigorous, ethically conducted, objectively presented according to the appropriate community standards, and of broad interest.

These guidelines are most relevant to reviews of Research Articles, Short Reports, Tools and Resources, and Research Advances. (Please refer to the invitation email and review form for guidance around reviews of Review Articles, Feature Articles, and Scientific Correspondence.)

eLife reviews have three parts, each of which is described in detail below:

  1. An evaluation summary (in two or three sentences) that captures the major conclusions of the review in a concise manner accessible to a wide audience.
  2. A public review that details the strengths and weaknesses of the manuscript before you, and discusses whether the authors’ claims and conclusions are justified by their data.
  3. A set of private recommendations for the authors that outline how you think the science and its presentation could be strengthened.

The public review will be published alongside the preprint, along with a response from the authors if they choose to produce one. Reviewers retain copyright of the public review and consent to eLife publishing the public review using a Public Domain Dedication, which allows the review to be freely reused by anyone for any purpose. Please note that the authors have control over when the public reviews are posted.

Review Consultation, Decision and Subsequent Steps

After all reviews are submitted, the complete reviews (all three sections) will be shared with the reviewers and the Reviewing Editor, and a consultation session will begin. During the consultation session the group discusses the paper’s strengths and weaknesses, and an effort is made to resolve any areas of disagreement and arrive at a consensus decision as to whether the paper is suitable for publication in eLife.

Papers receiving a revise decision

If the paper is deemed suitable for eLife, the Reviewing Editor drafts a decision letter, in consultation with the reviewers, that enumerates the issues that the authors must address in a revision before publication. This will be sent to the authors, along with the public portion of each review and any remaining recommendations from the reviews not addressed in the decision.

If the authors decide to proceed with a revision, we will post the public portions of each review alongside the preprint, and on our Sciety platform, unless the authors ask for them to be postponed until publication, either at eLife or elsewhere.

Once a revision is received, the Reviewing Editor will determine, following consultation with the original reviewers if they deem it necessary, if the authors have addressed the issues raised in the review. When they have, the paper will be formally accepted for publication by eLife. It will enter into our normal production process, and the preprint will be updated to indicate that the paper has been accepted with a link to the published version.

Papers we decline to publish

If the reviewers decide that the paper is not appropriate for eLife, the Reviewing Editor drafts a decision letter in consultation with the reviewers that explains the reasons for this decision, with the individual reviews (all three sections) attached. We hope and expect that many authors whose papers are rejected will nonetheless find the reviews constructive and suitable for posting alongside their preprint. Because these reviews will be posted under the eLife banner, they can be used directly by other journals as the basis for their own publishing decisions.

However, we realize that some authors may be reluctant to participate in a system that they fear might compromise their ability to get their work published if it is not accepted by eLife. We will therefore give authors the option to delay posting our reviews until their paper has been accepted for publication in another journal. Because all reviews will ultimately be publicly posted, we hope that most authors will be motivated to address the concerns raised during eLife’s review process before their paper is published.

Implicit Bias

Implicit bias (unconscious associations that affect our actions) has repeatedly been shown to influence decisions in scholarly publishing, especially with respect to author gender, career stage, nationality and other social groupings. To help increase awareness of what implicit bias is and how it might affect the eLife review process we encourage editors and reviewers to consult resources such as Project Implicit and Outsmarting Human Minds.

Involvement of Early-Career Researchers in Peer Review

eLife encourages editors to nominate and involve early-career researchers in the review process. The eLife pool of early-career reviewers aims to provide outstanding early-stage researchers the opportunity to peer review manuscripts. Members of the pool are nominated and/or approved by the eLife editors.

To be eligible, researchers must either be postdoctoral researchers or have spent less than five years in an independent research position (e.g. Group Leader). They must also have at least two first-author publications in an area of research within the scope of eLife. Researchers wishing to be considered for this pool should contact the journal staff (editorial@elifesciences.org) and provide:

  • A brief letter of endorsement from their supervisor (optional)
  • Their CV
  • The link to their webpage or, at a minimum, another site with details about their work (e.g. Google Scholar, Publons or ORCID)
  • Two representative first-author publications
  • 4-8 keywords and 1-3 subject areas of eLife relevant to their research, and
  • A short list of eLife Reviewing Editors they could work with

Self-nominations will be reviewed by at least one eLife Reviewing Editor.

eLife also encourages reviewers to involve early-career colleagues as co-reviewers, and we enable all reviewers to receive credit for their contributions through services such as Publons and ORCID.

2. Reviewing Policies

We encourage our reviewers to familiarise themselves with eLife’s journal policies before the review process commences.

Confidentiality and co-review

The review process is strictly confidential and must be treated as such by reviewers, both during the review process and subsequently. However, co-reviewing a manuscript with a single junior colleague can be an important learning experience that we are happy to support. To provide accountability and appropriate credit, the name of the co-reviewer should be disclosed to the editors in advance, and we encourage all reviewers to consider sharing their names with the authors. The two co-reviewers should agree on the wording of the review, and the same principles relating to confidentiality and competing interests apply to both. The senior reviewer should be the main point of contact for the discussion between the reviewers, but can confer with their co-reviewer during this discussion. Other than co-reviewing for training purposes, reviewers should not contact anyone not directly involved with the assessment of the article, including colleagues or other experts in the field, unless this has been discussed and approved in advance by the Reviewing Editor.

The content of the consultation session between the reviewers is also confidential and it is the role of the Reviewing Editor to draft the decision letter, based on the reviews and the discussion between the reviewers.

Reviewer Anonymity

We do not release the identities of the reviewers to the authors (unless requested by the reviewers themselves), but in the course of the discussion that forms part of the review process, each reviewer will know the identity of the other reviewer(s). We also request each reviewer’s permission to reveal their identity and release their report to another journal if the work is rejected and the author requests the reports for the purposes of submission elsewhere.

After much discussion, we have decided not to reveal reviewer identities in public reviews. There are good arguments in favor of revealing the identities of all reviewers, especially the possibility that such transparency will discourage poor reviewer behavior and increase confidence in the process. However, we have heard from many scientists, especially those in early stages of their careers or otherwise in vulnerable positions, who would not feel comfortable writing honest reviews of the work of more senior colleagues if they had to identify themselves. Thus all public reviews posted to preprints will be signed by eLife and not by individuals, putting the onus on us as an organization and community to ensure that our reviews are of the highest quality and ethics. Reviewers will still have the option to be named to the authors in the decision letter, and if so their names will also be listed within the published paper if it is published by eLife.

Competing Interests

We ask reviewers to recognise potential competing interests that could lead them to be positively or negatively disposed towards an article. We follow the recommendations of the ICMJE and the guidance provided by PLOS. Reviewers should inform the editors or journal staff if they are close competitors or collaborators of the authors. Reviewers must recuse themselves if they feel that they are unable to offer an impartial review. Common reasons for editors and reviewers to recuse themselves from the peer-review process include but are not limited to:

  • Working at the same institution or organization as one or more of the authors, currently or recently
  • Having collaborated with, or served as a mentor to, one or more of the authors during the past 5 years
  • Serving on the advisory board of the institution or organization of one or more of the authors
  • Having held grants with one or more of the authors, currently or recently
  • Having a personal relationship with an author that does not allow an objective evaluation of the manuscript

We will make every effort to follow authors’ requests to exclude potential reviewers, provided that a specific reason is provided.

Accusations of Misconduct

eLife is a member of the Committee on Publication Ethics (COPE), supports their principles, and follows their flowcharts for dealing with potential breaches of publishing ethics. Reviewers are asked not to make allegations of misconduct within the review itself or within the online consultation, but in the event of concerns about potential plagiarism, inappropriate image manipulation, or other forms of misconduct, reviewers should alert the journal’s editorial staff in the first instance. The editorial staff will consult the Senior Editor and Reviewing Editor, and consider the concerns further.

Research Conducted by eLife

As a way of improving our services, we periodically undertake research and surveys relating to eLife’s submission and review process. Where appropriate we will share our findings so that others can benefit. Participation does not affect the decision on manuscripts under consideration, or our policies relating to the confidentiality of the review process. If you would like to opt out of eLife’s research and/or surveys, please contact the journal office (here).

3. Writing the Review

eLife is a selective journal that publishes promising research in all areas of biology and medicine. Articles must be methodologically and scientifically rigorous, ethically conducted, and objectively presented according to the appropriate community standards.

The review form includes the following sections.

1. Evaluation Summary

The evaluation summary is a concise (ideally two or three sentence) summary of the reviewer’s assessment of the work and its likely impact in the field, outside the field, or in society. This summary should be easily readable by non-experts, and should clearly convey the judgment of the reviewer about whether the paper’s primary claims are supported by the data, and to whom the manuscript will be of interest or use.

The evaluation summary should not repeat the abstract of the paper, and should avoid field-specific jargon or abbreviations. As part of the review consultation, the Reviewing Editor will compose a single summary that the authors could include on a CV or a job or grant application as an independent, external assessment of the impact of their work.

A set of example evaluation summaries is provided below.

2. Public Review

The public review, which will be posted alongside the manuscript on the preprint server where it resides, is the main section of our review. It contains the reviewer’s detailed assessment of the work, written primarily for others reading the paper or considering using its methods, data or conclusions.

Our goal is not simply to take existing peer reviews and post them online. Rather we want to change how we construct and write peer reviews to make them useful to both authors and readers in a way that better reflects the work you as a reviewer put into reading and thinking about a paper.

We expect the bulk of a reviewer’s thoughts and judgment about a paper to appear in the public review, with only specific types of comments reserved for private correspondence with the authors.

We recommend that public reviews contain:

  • A summary of what the authors were trying to achieve.
  • An account of the major strengths and weaknesses of the methods and results.
  • An appraisal of whether the authors achieved their aims, and whether the results support their conclusions.
  • A discussion of the likely impact of the work on the field, and the utility of the methods and data to the community.
  • Any additional context you think would help readers interpret or understand the significance of the work.

The public nature of these reviews means that they:

  • Should be clear about any technical and conceptual concerns, but should also be written in a serious and constructive manner appropriate for a public audience, and mindful of the impact language choices might have on the authors.
  • Should address the entire paper, not just individual points or sections.
  • Should highlight how and where the authors succeeded, where there are useful data, and where there are major conceptual and technical advances.

The public review is an assessment of the manuscript in its current form, and is independent of any particular publishing decision. It should therefore not include:

  • Comments on the appropriateness of the manuscript for publication in eLife or speculation about where it should be published.
  • Suggestions on how to improve the science or the manuscript, especially those directed at increasing its impact, except where they help to convey points raised in the review to readers.
  • Open-ended questions.
  • Discussion of minor points or issues of presentation except where they may lead to confusion about the major points of the paper.

Examples of public reviews can be found below. Please note that we may lightly edit the reviews for tone and consistency prior to posting.

3. Recommendations for the authors

Although most aspects of a reviewer’s assessment belong in the public review, we reserve some types of comments and queries for a private set of recommendations to the authors, which will be communicated separately to the authors, editor and other reviewers.

This section should include:

  • Suggestions for improved or additional experiments, data or analyses, especially those directed at increasing the impact of the work and making it suitable for eLife.
  • Recommendations for improving the writing and presentation.
  • Minor corrections to the text and figures.

Information that is in the public review should not be repeated here. For example, as a general rule, concerns about a claim not being justified by the data should be explained in the public review, while specific suggestions for how the authors might address these concerns should be placed here.

However, we recognize that in some cases the best way to explain perceived flaws in an experiment or analysis is to suggest a better one, and in such cases the recommendation should be in the public review. Similarly, if there are issues of writing and presentation that are likely to lead to confusion or the drawing of incorrect conclusions by readers, these should be raised in the public review.

This section should also list any issues the authors need to address about the availability of data, code, reagents, research ethics, or other issues pertaining to the adherence of the manuscript to eLife’s publishing policies.

Examples of Evaluation Summaries

Evaluation summaries serve two purposes. First, they provide a concise guide as to whether a reader should be interested in reading the paper. Second, they provide an evaluation that authors can use, for example in tenure evaluations, job searches, or grant applications.

Example 1: This manuscript is of broad interest to readers in the field of reinforcement learning, decision making and the frontal cortex. The identification of a unique contribution of Orbitofrontal Cortex in learning but not choice is an important contribution to our understanding of the brain region. A combination of sophisticated modelling and precise causal interventions compellingly support the key claims in the paper.

Example 2: This paper will be of interest to the large class of neuroscientists who perform extracellular electrophysiology. It sets a new standard for automatically separating signals from different cells, and is therefore likely to be taken up broadly by the field. It performs extensive comparisons against the existing state of the art that support the major claims of the paper.

Example 3: The premise behind this manuscript is important and timely both for auditory neuroscientists and for informing technology. The data support the conclusions of the manuscript within the current context, but do not yet support generalising the conclusions to related contexts, such as other auditory stimuli or more naturalistic listening conditions.

Example 4: This paper will be of interest to cognitive neuroscientists who perform subsequent memory experiments with fMRI. It provides useful technical advice for the analysis of such data. The key claims of the manuscript are well supported by the data, and the approaches used are thoughtful and rigorous.

Example 5: This paper is of potential interest to a broad audience of neuroscientists, as it implies a major adjustment to our current understanding of cortical circuitry. The data quality is unusually high. However, reasonable alternative explanations can be identified such that the data do not strongly favor the preferred hypothesis put forward by the authors.

Example 6: This paper is of interest to scientists within the field of motor control. The data analysis is rigorous and the conclusions are justified by the data. The key claims of the manuscript are directly related to, and support, previous known findings.

Example 7: This paper will be of interest to scientists across systems neuroscience, and has high clinical relevance. It reveals a novel circuit mechanism underlying impulsive behaviour. A series of compelling experimental manipulations dissect the circuit, conclusively supporting the key claims of the paper.

Examples of Public Reviews

1) This is a review of Concerted action of kinesins KIF5B and KIF13B promotes efficient secretory vesicle transport to microtubule plus ends

Evaluation summary:

This paper is of interest for cell biologists studying intracellular transport. The work provides substantial new insight into competition and cooperation among molecular motors during intracellular cargo transport and clarifies the contribution of distinct classes of motor complexes. Overall, the data are properly controlled and analysed, although a few aspects of imaging and data analysis could be improved.

Public review:

Serra-Marques, Martin et al. investigated the individual and cooperative roles of specific kinesins in transporting Rab6 secretory vesicles in HeLa cells using CRISPR/Cas knockouts and live-cell imaging. They find that both kinesin-1 KIF5B and kinesin-3 KIF13B cooperate in transporting Rab6 vesicles, but Eg5 and other kinesin-3s (KIF1B and KIF1C) are dispensable for Rab6 vesicle transport. They show that both KIF5B and KIF13B localize to these vesicles and coordinate their activities such that KIF5B is the main driver of the cargos on older, MAP7-decorated microtubules, and KIF13B takes over as the main transporter on freshly-polymerized microtubule ends that are largely devoid of MAP7. Interestingly, the data also indicate that KIF5B is important for controlling Rab6 vesicle size, which KIF13B cannot rescue. By performing a technically impressive analysis of the motor distribution on vesicles with subpixel resolution, the authors find that the motors localize to the front of the vesicle when driving transport, but upon directional cargo switching, KIF5B but not KIF13B localizes to the back of the vesicle when opposing dynein. These data add in an interesting way to the ongoing discussion on whether motors of opposite polarity present on the same cargo engage in a tug-of-war.

The conclusions of this paper are mostly well supported by data, but some aspects of image acquisition and data analysis need to be clarified and extended.

1) The metrics used to quantify motility are sensitive to tracking errors and uncertainty. The authors quantify the number of runs (Figure 2D,F; 7C) and the average speed (Figure 3A,B,D,E,H). The number of runs is sensitive to linking errors in tracking. A single, long trajectory is often misrepresented as multiple shorter trajectories. These linking errors are sensitive to small differences in the signal-to-noise ratio between experiments and conditions, and the set of tracking parameters used. The average speed is reported only for the long, processive runs (tracks>20 frames, segments<6 frames with velocity vector correlation >0.6). For many vesicular cargoes, these long runs represent <10% of the total motility. In the 4X-KO cells, it is expected there is very little processive motility, yet the average speed is higher than in control cells. Frame-to-frame velocities are often overestimated due to the tracking uncertainty. To make their results more solid, the authors should have used additional metrics of vesicle motility. For example, they could have used metrics such as mean-squared displacement, which are less sensitive to tracking errors. The authors should have also provided either the average velocity of the entire run (including pauses), or the fraction of time represented by the processive segments to aid in interpreting the velocity data.

2) Adding control experiments to assess crosstalk between fluorescence images would be needed to increase confidence in the presented colocalization results.

3) The data on KIF13B-380 motility presented in Figure 8G is not sufficiently convincing. The tracks for KIF13B-380 motility are difficult to see, which is surprising as KIF13B has been shown to be a super-processive motor. Better data would have helped to substantiate the authors’ conclusions.

2) Evaluation summary:

The method is potentially of broad interest across different domains of cognitive neuroscience. A domain-general replay detection method would be of wide interest and utility. However, in its current form, the paper is lacking context and comparisons to existing methods, and falls short of demonstrating the technique’s generality.

Public review:

This work provides a new general tool for measuring rapid sequences of patterns in neural activity. Such sequences have been studied in the activity of cells in rodent hippocampus for decades (termed replay). They are suggested substrates for a number of important cognitive functions including memory consolidation, mental simulation and planning. Recently it has become possible to detect such sequences in humans using MEG and fMRI. This paper provides a modelling and inference framework for detecting such sequences from all types of data using the same approach. It should therefore allow replay to be compared more directly across species. It is also a more general technique than those typically used in rodents, so it may allow rigorous replay measurements in situations that are not currently possible.


Placing sequence analysis into a general linear modelling framework enables a powerful set of established tools for hypothesis testing and inference to be brought to bear on the analysis of sequences. Hypotheses about sequences can be formally expressed as regressors in a linear model, and standard tools can then be used for parametric or non-parametric inference. It also allows formal approaches to issues such as serial autocorrelations in the data, to ensure that inference is unbiased.

Because the regressors are built out of transitions from a graph, the technique is in principle amenable to measuring sequences in any situation where experiences can be expressed as a graph. This contrasts with other approaches that, for example, can only be used for rodents running along a linear track, but not in a maze or an open 2D environment. This generality in principle allows replay measurements in situations where other published techniques cannot be applied, and allows the same tools to be used to describe replay in many different situations, potentially allowing comparisons across them.

The technique in principle works in the same way across many different types of data, giving the potential to compare replay more directly across different species where very different types of data have been acquired.


Although the paper does have strengths in principle, its weakness is that these strengths are not directly demonstrated: the analyses presented are insufficient to fully support the key claims of the manuscript. In particular:

The authors imply the current method is superior to other methods on various different dimensions but there is very little actual comparison with other methods to substantiate this claim, particularly for sequences of more than two states which have been extensively used in the rodent replay literature (see Tingley and Peyrache, Proc Royal Soc B 2020 for a recent review of the rodent methods; different shuffling procedures are applied to identify sequenceness, see e.g., Farooq et al. Neuron 2019 and Foster, Ann Rev Neurosci 2017).

The authors claim that the method is more general than other methods on various different dimensions (ability to apply to different kinds of graphs; ability to apply to different species), but again the data supporting this claim are sparse. The method is not applied rigorously to graphs with different structure or to data from different species. Instead, it is applied to two MEG experiments with the same structure (linear, 4 elements), and there is a small, very hard to understand section where it is applied to rodent electrophysiology data. It is therefore very difficult to assess these claims.

The inference part of the work is potentially very valuable because this is an area that has been well studied in GLM/multiple regression type problems. However, the authors limit themselves to asking “first-order” sequence questions (i.e., whether observed sequenceness is different from random) when key questions, including whether or not there is evidence of replay, are actually “second-order” questions because they require a comparison of sequences across two conditions (e.g., pre-task and post-task; I’m borrowing this terminology from van der Meer et al. Proc Royal Soc B 2020). The authors do not address how to make this kind of comparison using their method.