Conversation
@samcoe this doesn't actually work, hence still a draft. I feel the code is in better shape and I've added tests. But while …
@heaths This is definitely a bit of confusing behavior from the backend during the …
@samcoe no, exit code is still 0.

Does GraphQL, or GitHub's implementation of it, make any stipulations for long-running operations (LROs)? I'm on the REST API Stewardship Board for Azure, and we have guidelines for REST APIs: https://github.com/microsoft/api-guidelines/blob/vNext/azure/Guidelines.md#long-running-operations--jobs. I realize this is more academic, but for any LROs like that, having a status monitor can be useful for customers to watch for when a resource is available: perhaps the original resource for, say, a PUT/PATCH; or a separate resource for a POST. A quick search on "graphql long running operation" yields no specs, but some interesting ideas using either a status monitor or returning an interface / union. Perhaps something to consider. Again, though, I realize such a change would be breaking. Just food for thought, I guess.

For the problem at hand, checking stdout (or stderr? I didn't look where it went) for that warning would be too fragile. For whatever reason fetch failed, maybe instead of a …
@heaths That is unfortunate that the exit code is still 0; I agree that parsing stdout to see if the cloned repo was empty is too brittle. Let me know how the investigation goes on using …

I am unaware of any stipulations for long-running operations that GitHub has for the GraphQL API. I also wouldn't be the best person to ask even if there were one; I can ask internally if you would like. It would be nice if there was one, though.
That'd be great if you want to have them reach out over email or Teams. I have a lot of experience in LROs and work with some others with even more. Not for GraphQL, mind you, but we could help with any REST discussions and some of that might translate to some ideas for GraphQL. |
@samcoe this is ready. I found that if I specify the branch explicitly, the clone will fail if the branch is not fully set up. I tried several times against a real template repo and got it to fail a couple of times; the updated test reflects that. I capture stderr and only emit it if it's not the expected error, which shows in the test. I attempted a few different ways to "inject" stderr into the stubs but, without a significant overhaul of the internal runner, was unsuccessful; however, with these changes I did confirm in a live test that I didn't see "fatal: Remote branch main not found in upstream origin" when a retry happened (set …
mislav
left a comment
This looks great; thank you! Just one point about the change in functionality.
GitHub Actions seems to be experiencing trouble.
Fixes #7055