@jrhastro said @jkrick was interested in feedback I have on the notebook review process based on our experience with a similar process at STScI, so here I'm offering some suggestions on that basis. All are to be taken entirely as suggestions to be ignored or addressed at your leisure! Big picture, I think this is good as it stands.

Note I'm writing this based on https://github.com/nasa-fornax/fornax-documentation/blob/62db23cd887ccd310d89cf8585632e849081da43/notebook_review_process.md just in case it changes between now and when someone's reading this.
The "Who should participate in these reviews?" item made me a little bit concerned: while it may not be for this documentation per se, at STScI we definitely need to have some structure around which particular teams/people with specific expertise might be best suited for reviewing certain notebooks. So it might be good to be a bit more concrete about who's generally expected to review which general areas. That of course doesn't need to be decided now or even necessarily live on github, but there it is anyway.
Note the style guide has some items that might be considered "science review" - in particular, things like the "learning goals" and related ideas. Right now the style guide is referenced only in the tech review, so it might be worth including it in both (since I suspect plenty of science reviewers will read only their own section and not the tech one, and vice versa).
I'm not sure it makes sense to require the science review to check whether all 3 archives are included. I can think of plenty of notebooks that make sense to live in Fornax but don't necessarily use all the archives. It's certainly important that at the top level we try to balance this, but at STScI we intentionally assigned that role to someone who is tasked with maintaining the top-level "is this good for our users and stakeholders" perspective. In practice that might be mostly the same people who do the science reviews, but at least sometimes it's useful to pull in someone for one or two science reviews who isn't necessarily going to understand the big picture.
"Does the notebook run end-to-end, out of the box?" - I would suggest changing this to something like "has the CI run correctly? If not, help the author fix it." - that is, in practice it's quite error-prone for the technical reviewer to actually check the notebook themselves as opposed to letting CI services answer this question.
"Is the code parallelized where possible?" - this is debatable, but I'm not sure if we want this is an explicit goal. In my experience, particularly in Python, parallelization adds significant complexity to some problems, and that detracts from the reader understanding the core science logic. So it might be better to say something like "If it's necessary for the science case, is the code parallelized" or similar.
If you like, I can do some of the above as a PR to the markdown file myself, but it's not clear to me if that's the process you prefer vs having me give feedback and @jkrick focusing on making the actual changes.
Thanks for the review, @eteq. I would indeed prefer to have the above comments in the form of suggestions as a PR; you can tag me as a reviewer.
> So it might be good to be a bit more concrete about who's generally expected to review which general areas.
It would be great to have your more concrete suggestions. Mine was just that any one scientist on Fornax would review the notebook against the Science Review Checklist below, and any one technical person on Fornax would review it against the Tech Review Checklist below. But mostly I want it to be clear that, in my opinion, not every scientist needs to review and not every tech person needs to review. I think we need buy-in all around on this before getting started.
> I'm not sure it makes sense to require the science review to check whether all 3 archives are included.
This was originally an explicit part of the requirements for fornax-demo-notebooks; we put in the phrase "if not, is that justified" with the full recognition that going forward there will be notebooks that will not satisfy this requirement. Happy to see suggestions for rephrasing this.
"Does the notebook run end-to-end, out of the box?" - I would suggest changing this to something like "has the CI run correctly? If not, help the author fix it."
Good point, I like this suggestion.
> So it might be better to say something like "If it's necessary for the science case, is the code parallelized" or similar.