feat: initial beer mcstas workflow #184

Conversation
src/ess/beer/conversions.py
Outdated
def compute_tof_from_known_peaks(
    da: DetectorData[RunType], graph: ElasticCoordTransformGraph
) -> DetectorTofData[RunType]:
    return da.transform_coords(
This is badly named, since it computes dspacing etc. in addition to tof. That's just how it works right now to get the demo notebook running without too much effort. I would like to connect to the regular powder diffraction workflow after this point.
src/ess/beer/conversions.py
Outdated
'tof': _tof_from_dhkl,
'coarse_dhkl': _compute_d,
'theta': lambda two_theta: two_theta / 2,
'dhkl_list': lambda: dhkl_list,
@jl-wynen in chat you mentioned it's better to generate a new _compute_d function binding dhkl_list in its scope, instead of having dhkl_list be an "intermediate" here and then using keep_intermediate=False. The plan is to do that eventually.
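A minimal sketch of that closure approach, for reference (the factory name, the rough d estimate, and the nearest-reflection lookup are all illustrative assumptions, not the package's actual implementation):

```python
import numpy as np
import scipp as sc
from scipp.constants import h, m_n


def _make_coarse_dhkl(dhkl_list: sc.Variable):
    """Build a coord-transform function with ``dhkl_list`` bound in its closure,
    so it never appears as an intermediate coordinate in the graph."""
    sorted_dhkl = np.sort(dhkl_list.to(unit='angstrom').values)

    def coarse_dhkl(event_time_offset, Ltotal, two_theta):
        # Rough d-spacing estimate from the raw event time offset
        # (illustrative; the real workflow also corrects for chopper timing).
        d_est = (
            h / m_n * event_time_offset / (2 * Ltotal * sc.sin(two_theta / 2))
        ).to(unit='angstrom')
        # Snap each estimate to the nearest known reflection.
        idx = np.clip(
            np.searchsorted(sorted_dhkl, d_est.values), 1, len(sorted_dhkl) - 1
        )
        left, right = sorted_dhkl[idx - 1], sorted_dhkl[idx]
        nearest = np.where(d_est.values - left < right - d_est.values, left, right)
        return sc.array(dims=d_est.dims, values=nearest, unit='angstrom')

    return coarse_dhkl


# In the graph this would replace the 'dhkl_list' intermediate, e.g.:
# graph['coarse_dhkl'] = _make_coarse_dhkl(dhkl_list)
```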
Can you do this now or open an issue to track it?
The notebook in its current shape is mainly there to display the current status of the implementation to Celine. When we know more, it will be replaced by a more useful tutorial notebook.
src/ess/beer/clustering.py
Outdated
da.coords['coarse_d'] = (
    constants.h
    / constants.m_n
    * (da.coords['event_time_offset'] - da.coords['approximate_t0'])
    / sc.sin(da.coords['two_theta'] / 2)
    / da.coords['L0']
    / 2
).to(unit='angstrom')
Bit low-level, but since I noticed this: Can you use https://github.com/scipp/scippneutron/blob/5acb18bc675eba373fae94ea73012d599e1e1f49/src/scippneutron/conversion/tof.py#L61 instead of reimplementing it here?
Yes, we can.
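A minimal sketch of what that could look like, assuming the linked function is `scippneutron.conversion.tof.dspacing_from_tof` and reusing the coordinate names from the snippet above (pairing `event_time_offset - approximate_t0` with `tof` and `L0` with `Ltotal` simply mirrors the current code, it is not a verified choice):

```python
from scippneutron.conversion.tof import dspacing_from_tof

# Same coarse time-of-flight and path-length estimates as in the snippet above.
da.coords['coarse_d'] = dspacing_from_tof(
    tof=da.coords['event_time_offset'] - da.coords['approximate_t0'],
    Ltotal=da.coords['L0'],
    two_theta=da.coords['two_theta'],
).to(unit='angstrom')
```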
src/ess/beer/workflow.py
Outdated
0.5107,
0.5107,
0.5102,
0.5057,
Where do those numbers come from, and what sample do they apply to? And why are they hardcoded in the package?
They are peak positions for a particular test sample. They should not be hardcoded in the package.
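For illustration, one way the peak positions could be supplied by the user instead (the `KnownPeakDSpacings` name and the angstrom unit are assumptions, not existing package API):

```python
from typing import NewType

import scipp as sc

# Hypothetical workflow parameter holding the known reflections of the sample.
KnownPeakDSpacings = NewType('KnownPeakDSpacings', sc.Variable)

# The user would set it per sample instead of relying on values baked into
# the package, e.g.:
# wf[KnownPeakDSpacings] = sc.array(
#     dims=['dhkl'], values=[0.5107, 0.5102, 0.5057], unit='angstrom'
# )
```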
| "wf = BeerMcStasWorkflowKnownPeaks()\n", | ||
| "wf[Filename[SampleRun]] = mcstas_duplex(9)\n", | ||
| "da = wf.compute(DetectorTofData[SampleRun])\n", | ||
| "da.hist(dspacing=dspacing).plot()" |
Why does this data have a dspacing coord? It should be in tof, not dspacing.
Yes I agree, it should just have tof.
| "wf = BeerMcStasWorkflow()\n", | ||
| "wf[Filename[SampleRun]] = mcstas_duplex(9)\n", | ||
| "da = wf.compute(DetectorTofData[SampleRun])\n", | ||
| "da.bins.coords['dspacing'] = (sc.constants.h / sc.constants.m_n * da.bins.coords['tof'] / sc.sin(da.bins.coords['two_theta'] / 2) / da.bins.coords['Ltotal'] / 2).to(unit='angstrom')\n", |
This one doesn't have dspacing? Then it should be a different type.
docs/api-reference/index.md
Outdated
powgen
```

## ESSbeer
We actually don't use the 'ESS' prefix in these headings in other packages. Can you change this to 'Beer' and also above to 'Dream'?
And please move this above the SNS section. Beer is more important.
max_distance_from_streak_line = mod_period / 3
sin_theta_L = sc.sin(da.bins.coords['two_theta'] / 2) * da.bins.coords['Ltotal']
t = da.bins.coords['event_time_offset']
for _ in range(15):
Please make this a parameter.
Is there a way to check for convergence instead of blindly running 15 iterations? If we don't have any checks, how can we know whether the data is meaningful?
I think it's an open question what the convergence check would be here.
I agree it would be good to have, but for now I think we should create an issue to remember it and leave it as follow-up work.
Making it a parameter is not a good idea. That gives the impression this is something the user should fiddle with. It's unlikely to help them and will probably just cause confusion and extra work for us.
If 15 is insufficient, we should just increase it, but I haven't seen any indication that it is.
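For the suggested follow-up issue, a rough sketch of what an early-exit convergence check could look like (`iterate_until_converged` and the tolerance are illustrative assumptions, not part of the package):

```python
import scipp as sc


def iterate_until_converged(update, initial, max_iterations=15, rtol=1e-6):
    """Apply ``update`` repeatedly, stopping early once the estimate settles.

    Sketch only: the streak-line refinement loop could be wrapped like this to
    get an early exit while keeping the current cap of 15 iterations.
    """
    current = initial
    for _ in range(max_iterations):
        refined = update(current)
        # Relative change between successive estimates, as a dimensionless scalar.
        change = sc.max(sc.abs(refined - current)) / sc.max(sc.abs(current))
        if (change < sc.scalar(rtol)).value:
            return refined
        current = refined
    return current
```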
src/ess/beer/conversions.py
Outdated
'tof': _tof_from_dhkl,
'coarse_dhkl': _compute_d,
'theta': lambda two_theta: two_theta / 2,
'dhkl_list': lambda: dhkl_list,
Can you do this now or open an issue to track it?
Co-authored-by: Jan-Lukas Wynen <jan-lukas.wynen@ess.eu>
What do you mean? :) Do what now?
That was a reply to #184 (comment). Don't know why it showed up as a separate comment as well.
Aha, I see. It's done now anyway.
Fixes #180, #181
Initial data reduction workflow for BEER McStas data files.
I've tried to keep it pretty simple and flexible for now.
I'm not sure where this should be "connected" to the regular diffraction workflow, so for now it's a separate workflow.
For now this is a draft, because the McStas data loader needs to load chopper information more completely before it can properly handle different chopper "modes".
Comments and suggestions are welcome.