
Some suggested modification to notebook_review_process.md #55

Open
eteq opened this issue Oct 25, 2024 · 1 comment
Labels
documentation Improvements or additions to documentation

Comments

@eteq

eteq commented Oct 25, 2024

@jrhastro said @jkrick was interested in feedback I have on the notebook review process, based on our experience with a similar process at STScI, so I'm offering some suggestions here on that basis. All are to be taken entirely as suggestions, to be ignored or addressed at your leisure! Big picture, I think this is good as it stands.

Note that I'm writing this based on https://github.com/nasa-fornax/fornax-documentation/blob/62db23cd887ccd310d89cf8585632e849081da43/notebook_review_process.md, just in case it changes between now and when someone reads this.

  • The "Who should participate in these reviews?" item made me a little bit concerned: while it may not be for this documentation per se, at STScI we definitely need to have some structure around which particular teams/people with specific expertise might be best suited for reviewing certain notebooks. So it might be good to be a bit more concrete about who's generally expected to review which general areas. That of course doesn't need to be decided now or even necessarily live on github, but there it is anyway.
  • Note the style guide has some items that might be considered "science review" - in particular things like the "learning goals" and related ideas. Right now the style guide only appears in the tech review checklist, so it might be worth including it in both (since I suspect plenty of science reviewers will read only their own section and not the tech one, and vice versa).
  • I'm not sure it makes sense to require the science review to check whether all 3 archives are included. I can think of plenty of notebooks that make sense to live in Fornax but don't necessarily use all the archives. It's certainly important that at a top level we try to balance this, but at STScI we intentionally assigned that role to someone who is tasked with the top-level "is this good for our users and stakeholders" sort of questions. In practice that might be mostly the same people who do science reviews, but at least sometimes it's useful to pull in someone for one or two science reviews who isn't necessarily going to understand the big picture.
  • "Does the notebook run end-to-end, out of the box?" - I would suggest changing this to something like "has the CI run correctly? If not, help the author fix it." - that is, in practice it's quite error-prone for the technical reviewer to actually check the notebook themselves as opposed to letting CI services answer this question.
  • "Is the code parallelized where possible?" - this is debatable, but I'm not sure if we want this is an explicit goal. In my experience, particularly in Python, parallelization adds significant complexity to some problems, and that detracts from the reader understanding the core science logic. So it might be better to say something like "If it's necessary for the science case, is the code parallelized" or similar.

If you like, I can do some of the above as a PR to the markdown file myself, but it's not clear to me whether that's the process you prefer, vs. having me give feedback and @jkrick making the actual changes.

@jkrick
Collaborator

jkrick commented Oct 28, 2024

Thanks for the review, @eteq. I would indeed prefer to have the above comments in the form of suggestions in a PR; you can tag me as a reviewer.

So it might be good to be a bit more concrete about who's generally expected to review which general areas.

It would be great to have your more concrete suggestions. Mine was just that any one scientist on Fornax would review the code against the Science Review Checklist below, and any one technical person on Fornax would review the code against the Tech Review Checklist below. But mostly I want it to be clear that, in my opinion, not every scientist needs to review and not every tech person needs to review. I think we need buy-in all around on this before getting started.

I'm not sure it makes sense to require the science review to check whether all 3 archives are included.

This was originally an explicit part of the requirements for fornax-demo-notebooks; we included the phrase 'if not, is that justified' with the full recognition that going forward there will be notebooks that do not satisfy this requirement. Happy to see suggestions for rephrasing this.

"Does the notebook run end-to-end, out of the box?" - I would suggest changing this to something like "has the CI run correctly? If not, help the author fix it."

Good point, I like this suggestion.

So it might be better to say something like "If it's necessary for the science case, is the code parallelized" or similar.

I also like this suggestion.
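For reference, a minimal sketch of the readability tradeoff being discussed, with purely illustrative function and data names (not taken from any Fornax notebook):

```python
# Hedged illustration: the same per-source computation, serial vs. parallel.
# The parallel version adds pool setup and a main guard that a reader must
# follow on top of the science logic itself.
from multiprocessing import Pool

def measure_flux(source):
    # stand-in for whatever per-source science step a notebook performs
    return source["counts"] / source["exposure"]

sources = [{"counts": 120.0, "exposure": 30.0},
           {"counts": 450.0, "exposure": 60.0}]

# Serial: the science step reads directly.
fluxes_serial = [measure_flux(s) for s in sources]

# Parallel: same result, more scaffolding.
if __name__ == "__main__":
    with Pool(processes=2) as pool:
        fluxes_parallel = pool.map(measure_flux, sources)
```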

@jkrick jkrick added the documentation Improvements or additions to documentation label Oct 28, 2024