Add bary/heliocentric radial velocity correction #5752
Conversation
Hmm, a curious failure mode to …

Haven't looked in much detail yet, but it looks like the inputs to …

Just to add back links: issue asking for barycentric correction: #3544; @StuartLittlefair's sample code: https://gist.github.com/StuartLittlefair/5aaf476c5d7b52d20aa9544cfaa936a1 (I'll try to look at the actual code here soon...)
astropy/coordinates/velocities.py
Outdated
```python
gtarg = target.transform_to(GCRS(obstime=t, obsgeoloc=gcrs_p))
targxyz = gtarg.represent_as(UnitSphericalRepresentation).to_cartesian().xyz

res = matrix_product(vsuntarg_cartrepr.xyz, targxyz)
```
OK, what you really want here is the regular dot product, i.e., np.inner. Annoyingly, that function is not quantity-proof.
BUT we just changed representations to do this for us, so just do:

```python
gtarg /= gtarg.norm()
res = gtarg.dot(vsuntarg_cartrepr)
```
@mhvk is almost right. Since you've already normalised gtarg through the conversion to a UnitSphericalRepresentation and back, you want:
```python
targcart = gtarg.represent_as(UnitSphericalRepresentation).to_cartesian()
res = targcart.dot(vsuntarg_cartrepr).to(KPS)
```
In my suggestion I wrote gtarg on purpose, i.e., I assumed the whole line targxyz would be removed (as it is unnecessarily slow compared to just dividing by the norm).
I went ahead and profiled it, and, quite surprisingly, the UnitSphericalRepresentation way is faster:
```python
>>> %%timeit
>>> targcart = gtarg.represent_as(UnitSphericalRepresentation).to_cartesian()
>>> res = targcart.dot(vcorr_cart)
497 µs ± 14.1 µs per loop (mean ± std. dev. of 7 runs, 1000 loops each)
```

and

```python
>>> %%timeit
>>> gtargd = gtarg.data
>>> gtargd /= gtargd.norm()
>>> res2 = gtargd.dot(vcorr_cart)
891 µs ± 28.6 µs per loop (mean ± std. dev. of 7 runs, 1000 loops each)
```
That surprises me, but there it is... so will go that route.
OK, real data beats everything! I also think this route is clearer than the original - it still avoids the expensive .xyz. Note that you can remove the .to_cartesian() -- .dot will do that for you.
But of course need to understand: represent_as(USph) is fast - just a new class initialization (this can become faster still...). Then, to_cartesian is slightly faster than it would have been before, since no multiplication with a distance (of 1.). While for my original suggestion there is the division by the distance (quite fast, but needless), a similar reinitialization, and then a slightly more expensive conversion to cartesian. My timings are not as different, but still the route via unitspherical is clearly better:
```python
g = SphericalRepresentation(np.arange(0, 100)*u.deg, np.arange(-45., 55.)*u.deg, np.arange(100, 200)*u.kpc)
v = CartesianRepresentation(5., 5., 5., unit=u.km/u.s)

%timeit (g/g.norm()).dot(v)
# 1000 loops, best of 3: 1.69 ms per loop
%timeit g.represent_as(UnitSphericalRepresentation).dot(v)
# 1000 loops, best of 3: 1.49 ms per loop
%timeit g.represent_as(UnitSphericalRepresentation).to_cartesian().dot(v)
# 1000 loops, best of 3: 1.49 ms per loop
```
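Both routes being timed here reduce to the same operation: normalise the target's Cartesian position into a unit line-of-sight vector, then take its dot product with the correction velocity. A minimal plain-numpy sketch of just that step (the function name and bare float arrays are my own illustration; the real code operates on Quantity-aware representations):

```python
import numpy as np

def los_velocity_correction(target_xyz, v_corr_xyz):
    """Project a correction velocity onto the line of sight to the target.

    This is the operation that both the norm() route and the
    UnitSphericalRepresentation route ultimately compute.
    """
    n_hat = target_xyz / np.linalg.norm(target_xyz)  # unit line-of-sight vector
    return float(np.dot(n_hat, v_corr_xyz))          # radial component

# Target along +x, velocity of 5 in each axis: only the x-component
# survives the projection.
print(los_velocity_correction(np.array([10.0, 0.0, 0.0]),
                              np.array([5.0, 5.0, 5.0])))  # -> 5.0
```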
Hi @eteq, I think it could be a good idea to implement this code now and worry about integration with a framework later. I'll add some inline style comments to the code, but I have a larger concern about precision. If you read this paper - and in particular p. 842 - you'll see that the approach we're taking here, of adding and subtracting velocities, cannot be more accurate than a few m/s. Should we be aiming for better than this? Without cm/s precision, the exoplanet folk won't be interested in this, whereas the time corrections are good enough for all but pulsar astronomers. Given that the exoplanet community is possibly the largest user base for this code, I'd argue we should be implementing a more precise algorithm. On the subject of testing precision: the algorithm in the paper above has an online calculator which could be used to generate barycentric corrections for testing - http://astroutils.astronomy.ohio-state.edu/exofast/barycorr.html
astropy/coordinates/velocities.py
Outdated
```python
gcrs_p, gcrs_v = loc.get_gcrs_posvel(t)

vsuntargxyz = vsunearth.xyz + gcrs_v.xyz
return CartesianRepresentation(vsuntargxyz.to(KPS))
```
This isn't needed any more; you should be able to do:

```python
return vsunearth + gcrs_v
```

unless you really want it in km/s at this point.
astropy/coordinates/velocities.py
Outdated
```python
vsuntarg_cartrepr = helio_vector(t, loc)
gcrs_p, _ = loc.get_gcrs_posvel(t)

gtarg = target.transform_to(GCRS(obstime=t, obsgeoloc=gcrs_p))
```
For some reason I find this confusing, but shouldn't you be using the ICRS co-ordinate of the target here, and finding the dot product with the velocity vector? It will only make a ~1 m/s difference but it will be important if we shoot for a more precise algorithm
Hmm... I don't think that's right. It's sort of in the definition of the correction, but the way I think about barycentric correction is: "the velocity vector of the earth relative to the barycenter, projected along the line-of-sight from the observer to the target". If I used the ICRS coordinate instead of the GCRS coordinate, that would be the line-of-sight from the barycenter to the target. For anything beyond the solar system this is a tiny effect... but it is still what I would say is the correction.
Should probably be explicit about this in the docs, though.
I think @eteq is right: consider a not-too-distant star that is exactly at the ecliptic pole, and which has a proper motion. As the earth orbits the sun, this proper motion would induce a radial-velocity modulation, because as seen from the geocenter, the position of the star oscillates around the ecliptic pole. But in ICRS, the star doesn't move.
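For a sense of scale on this geocentric-vs-barycentric line-of-sight question, here is a back-of-envelope bound I'm adding (not from the thread itself): the static part of the effect is at most Earth's orbital speed times the parallax angle (~1 AU / distance), which lands right at the cm/s-to-m/s level the precision discussion cares about. Constants are standard values; the function is purely illustrative.

```python
# Bound the radial-velocity difference between projecting along the
# geocentric vs. the barycentric line of sight: the two directions
# differ by at most roughly the parallax angle, so dotting Earth's
# ~30 km/s orbital velocity through that angle bounds the effect.
AU = 1.495978707e11          # metres
PC = 3.0856775814913673e16   # metres
V_EARTH = 29.78e3            # m/s, mean orbital speed of Earth

def max_rv_difference(distance_pc):
    """Upper bound (m/s) on the geocentric-vs-barycentric LOS effect."""
    parallax_rad = AU / (distance_pc * PC)
    return V_EARTH * parallax_rad

for d in (1.0, 10.0, 100.0):
    print(f"d = {d:6.1f} pc -> |dv| <~ {100 * max_rv_difference(d):.3f} cm/s")
```

Even for a star at 1 pc this is well under 1 m/s, consistent with @StuartLittlefair's "~1 m/s difference" estimate, but it is not negligible if the goal is cm/s precision.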
astropy/coordinates/velocities.py
Outdated
```python
    return CartesianRepresentation(vsuntargxyz.to(KPS))


def helio_corr(t, loc, target):
```
In my gist I used the signature (Time, SkyCoord, EarthLocation=None), to be consistent with how we implemented time corrections. Again, for consistency, if the EarthLocation is None, we try and get the location from the Time object.
I'm actually really against taking any location information from the Time object. The whole point of having EarthLocation exist is to get around the awkwardness there, and it's more natural anyway for something in coordinates. But the rest sounds fine!
We had a long discussion for the barycentric time corrections about where these things should live. For velocity corrections, to me SkyCoord is more logical, in which the signature would become (SkyCoord, Time, Location) - with defaults of sc.obstime and sc.location (and, if a time is passed in, a secondary default of time.location).
That said, I think it would be fine to have a short-cut on Time (and similarly a short-cut for time corrections on SkyCoord).
If we do add it to Time as well, I would stick to @StuartLittlefair's signature for consistency with light_travel_time - it seems logical to look for defaults first on self.
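The fallback order being discussed - explicit argument first, then attributes on the coordinate, then the time's own location - can be sketched in plain Python. Class and function names here are hypothetical stand-ins, not the eventual astropy API:

```python
# Sketch of default resolution for a (SkyCoord, Time, Location)-style
# signature: explicit arguments win, then sc.obstime / sc.location,
# then time.location as a secondary default.
class Coordish:
    def __init__(self, obstime=None, location=None):
        self.obstime = obstime
        self.location = location

class Timeish:
    def __init__(self, location=None):
        self.location = location

def resolve_time_and_location(coord, time=None, location=None):
    """Return (time, location) following the proposed fallback chain."""
    if time is None:
        time = coord.obstime                 # default: sc.obstime
    if time is None:
        raise ValueError("need a time: pass one or set coord.obstime")
    if location is None:
        location = coord.location            # primary default: sc.location
    if location is None:
        location = getattr(time, "location", None)  # secondary: time.location
    if location is None:
        raise ValueError("need a location on the call, coordinate, or time")
    return time, location

t = Timeish(location="La Palma")
print(resolve_time_and_location(Coordish(obstime=t)))  # falls back to the time's location
```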
On the implementation: I would suggest initially following the precedent of the barycentric time corrections and making the public API a method rather than a stand-alone version. The method is probably most logical on …, where …
Longer term, I guess we'd want …
@StuartLittlefair - Looking at the reference you cited, it would seem what we need to provide is just the velocity correction, but we have to make clear in the documentation that one applies this as … (and we'll need to implement that ourselves later...)
Yes, that's true. However, I think we are still missing important (i.e. m/s) physics. There's a Python interface to the online calculator I linked to earlier. If you install that, you can see that neither my gist nor @eteq's code agrees with the results of BARYCORR to a few m/s. This gist shows an example...
@eteq - changed the milestone to 3.0 for now -- I think we have enough on our plate without this (and if you get the velocities right, this comes for free with the proper coordinate transform...).
Note that, while in principle this functionality is now supported (with a lot more) in #6226 and the PRs it is derived from, the finite-difference calculation in that PR has some numerical stability issues for some relevant use cases. So I think the best thing to do is include this PR (updated with the comments, of course), with a note in the docs that it may at some point be adjusted to use the full velocity framework if the numerical stability problems can be solved. Will update this PR ASAP with that plan in mind.
Great @eteq - perhaps add a note in the docs that this PR doesn't yet provide m/s precision and link to the barycorr Python module? Otherwise there's a risk we'll get lynched by some exoplanet people...
I'm still rather uncomfortable with the API here, with a new module. Since this is rather simple, I'd much rather have a method on …

More generally, I think with the velocities we have more than enough on our plate, and I prefer to think about this better -- another option is to simply document this as part of …
I'm about to try to re-work this with the implementation comments above in mind, but just a couple thoughts on the bigger picture from @StuartLittlefair:
@mhvk - I'm receptive to the idea of it being in a method, as you suggest... the main concern, though, is that it might lead to a lot more confusion with the velocities work @adrn and I are trying to finish because there'd be two ways to do basically the same thing. A function, meanwhile, is much easier to cast as sort of a "temporary" workaround or an alternate approach (even if we support it indefinitely by having it eventually be based on the properly-done coordinates stuff). People will be much more likely to see it as a distinct implementation.
With the commits I just pushed up, I think I addressed all the comments. I did not add an …

I think this is very close now (assuming tests pass), but two remaining unresolved items: …
I think @adrn and @mhvk might want to respond to those, so I won't merge now. But this is very close, so it's a clear candidate for backporting later this week, and hence I won't push the milestone.
On disallowing …

Other than elegance, is there any real issue with having a clear hierarchy, e.g. …

On the second restriction, I can only think of very contrived use cases where you would want to use a different …
@StuartLittlefair's point, I think, is valid: typical use in my "ICRS coordinate case" would be to calculate both barycentric corrections for the same …
OK, I updated this so that it will use …

I'm against the idea of allowing any combination (e.g. …
mhvk
left a comment
This now looks all OK to me. Missing, though, is documentation.
@mhvk - good point re: docs, but my plan was to include this in the section that is to be added in #6219 - so that would be easiest post-merge, and possibly in between the beta and final release. We should create an issue accounting for that once this is merged and make sure it's milestoned 2.0. (Although there's a real test failure in the docstring which I'll fix shortly.)
I'm fine with adding this to the (coming) velocity documentation, but yes, an issue milestoned 2.0 is required!
Same, fine to add later, as long as there is a 2.0 issue for it.
[docs only]
I added a note about adding the documentation for this in #6260. Assuming the tests pass, I think this is now ready!
Apologies. Thought I'd replied - looks fine, merge away!
I'm going to go ahead and merge this, but @StuartLittlefair, if I misunderstood your intent above, feel free to raise it as a separate issue and/or a PR of your own. If we can get to that by next Monday we can still pull it into 2.0, as that would be a "bug" in my interpretation of your comment 😉
@eteq - thanks for merging this! However, I noticed the following issue with this pull request: …

Would it be possible to fix this? Thanks! If you believe the above to be incorrect (which I - @astrobot - very much doubt), you can ping @astrofrog.
Whoops! Forgot the changelog... I think I'll just backport this to v2.0.0 now, then, and fix it directly in master to avoid confusion...
Add bary/heliocentric radial velocity correction
(commits listed above are for the changelog in master, the backport of this PR to v2.0.x, and the backport of the changelog commit, respectively)
This PR implements functions for computing heliocentric velocities. The main practically useful bit is the helio_corr function, which computes the correction required to convert an observed radial velocity to a heliocentric velocity. This is a bread-and-butter calculation for basically all astronomical spectroscopic analysis. The neat thing is that the implementation here is pretty trivial now that the SS ephemerides stuff is in, and it can even use JPL ephemerides if needed.

Now the question is: is this how we want to deal with velocities? Instead of doing these as piecemeal functions, this could be done as an actual transformation framework. That needs to be a separate discussion (I think there's already an issue about it?), but either way this code is fundamental. So I'm creating this PR now so that the code is here; if we can get this figured out by 2.0 we put it in that framework, otherwise we merge this for 2.0 to make sure something is in there to do this fundamental activity.
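To make the description concrete, here is a toy numpy sketch of the calculation helio_corr performs, with made-up numbers and my own function name (the real implementation builds the velocity vectors from solar-system ephemerides and coordinate transforms):

```python
import numpy as np

def helio_corr_sketch(v_helio_earth, v_geo_observer, target_unit_vec):
    """Toy heliocentric-correction calculation: add the heliocentric
    velocity of the Earth and the geocentric velocity of the observer,
    then project the total onto the unit vector toward the target.
    Inputs are plain 3-vectors (km/s for the velocities)."""
    v_total = v_helio_earth + v_geo_observer
    return float(np.dot(v_total, target_unit_vec))

# Earth orbiting at ~29.8 km/s along +y, observer rotating at ~0.46 km/s
# along +x, target toward +y: the orbital term dominates the correction.
corr = helio_corr_sketch(np.array([0.0, 29.8, 0.0]),
                         np.array([0.46, 0.0, 0.0]),
                         np.array([0.0, 1.0, 0.0]))
print(corr)  # -> 29.8
```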
One bonus bit to add if we go this route: …
cc @adrn @mhvk @astrofrog