feat(web): headless segmentation unit tests 🐵 #7157
jahorton merged 19 commits into feature-gestures from
Conversation
User Test Results: test specification and instructions. User tests are not required.
```js
for (let recordingID of testRecordings) {
  it(`${recordingID}.json`, function() {
    this.timeout(2 * testconfig.timeouts.standard);
```
For longer-duration (as in, time-duration) pre-recorded tests, the new segmentation mechanics can lead to a large number of mocked setTimeout calls. On my development machine, this means extra time is needed to fully emulate the sequences. This usually happened with a pre-recorded test that lasted 11 seconds, even though it had less movement than some of the others.
It may be possible to set a flag somewhere to disable segmentation for these tests, which would also dodge the cause.
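Another option worth noting: if the mocked timers expose a virtual clock, the whole recording can be replayed without any real waiting. The sketch below is illustrative only (the `FakeTimers` class and its API are hypothetical, not the project's actual mocking utility); it shows the general technique of flushing queued `setTimeout` callbacks synchronously against a virtual clock.

```javascript
// Hypothetical sketch: a minimal fake-timer queue that flushes mocked
// setTimeout callbacks synchronously, so long recordings replay instantly.
class FakeTimers {
  constructor() {
    this.queue = [];   // pending {time, fn} entries
    this.now = 0;      // virtual clock, in ms
  }

  // Stand-in for the mocked setTimeout: schedule fn at now + delay.
  setTimeout(fn, delay) {
    this.queue.push({ time: this.now + delay, fn });
  }

  // Advance the virtual clock, running due callbacks in time order.
  tick(ms) {
    this.now += ms;
    this.queue.sort((a, b) => a.time - b.time);
    while (this.queue.length && this.queue[0].time <= this.now) {
      this.queue.shift().fn();
    }
  }
}

const timers = new FakeTimers();
const fired = [];
timers.setTimeout(() => fired.push('segment boundary'), 500);
timers.setTimeout(() => fired.push('longpress'), 11000);
timers.tick(11000); // replay an 11-second recording with no real waiting
```

Libraries like sinon provide this kind of clock (`useFakeTimers()` / `clock.tick()`), so if the test mocks already use one, advancing it in a single jump may sidestep the wall-clock cost without disabling segmentation.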
common/web/gesture-recognizer/src/test/auto/headless/segmentation.js
mcdurdin left a comment:
I can't really give much of an opinion on the validity of the tests... I've spent a couple of hours reviewing this and there's nothing glaringly obvious. Pretty concerned about the mathematical complexity of gestures and really wondering if there is a simpler path.
I've wondered as well. I've looked back at https://www.npmjs.com/package/@use-gesture/vanilla a few times, since it seems to be a pretty prominent gesture library... but if it actually is able to do what we need, the amount of time it'd take me to process the library and figure out how to do everything we want seems to be pretty high. It seems specially tuned for user-interactive UI elements & animations, rather than keyboard-oriented gestures - it's all about "move", "drag", "zoom", "scroll", and "pinch". The paradigm is different enough that it'd take a lot of code 'wrangling', as I haven't seen a simple way to 'chain' gestures while looking at its API. (Longpresses after a roaming-touch would be very tricky, let alone without 'chaining' them from component gestures... and even then, there's longpress -> subkey selection, which I don't see being modeled clearly from its paradigm either.) I also found "zingtouch", but the demo doesn't appear to support multi-touch. Trying to start a second drag in the "swipe" demo instantly killed the first drag without appearing to model the second drag. That's not a great sign in its favor, at least not for our use case.
Glad to hear that you've been looking at other libraries. I wonder if some of their models for tackling gesture recognition may be helpful even if the libraries themselves don't meet our needs? Have you dug into the sources of any of them? (I'm just curious, really!)
I hadn't, but it wasn't too hard to find use-gesture's core recognition engine code. A few notes from a quick look:
When your paradigm doesn't expect touchpath segmentation, all the calculations automatically assume that they're part of one continuous motion. That assumption greatly simplifies everything for them, comparatively speaking. Were we to adopt their code, we'd have to handle segmentation ourselves when and where needed and have gestures cancel other gestures at a relatively high level, rather than having this handled by the gesture-recognition engine itself, which is what our design will be aiming to do. All of that said, pinch may be a useful reference if and when we decide to implement caret panning. I still haven't fully wrapped my head around handling of multi-touchpoint gesture types, so that'd definitely be a helpful reference.
Remind me which gestures need cancellation by other gestures in our model?
To name a few off the top of my head:
As soon as the longpress timer resolves, it puts us into "longpress" state -- the longpress subkey menu is visible. Flicks and any other gestures should be ignored while in longpress state -- all interactions are with the longpress menu. We return to base state once the user releases their finger.
I'd be comfortable with dropping roaming touches altogether.
Is this the same as above?
I have a sense that we don't need a generic solution for resolving multi-stage gestures. If we can instead have simple tests at each point in the state machine to determine which other gestures will cancel the existing state, we may be able to simplify the entire model somewhat? It'd be good to review this together, perhaps next week? (I wish we could do this in person; it's so much easier to whiteboard, etc.)
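To make the "simple tests at each point in the state machine" idea concrete, here is a rough sketch. All names and rules here are hypothetical, drawn only from the longpress example above, not from the actual engine design; the point is that cancellation can be a per-state lookup rather than a generic resolver.

```javascript
// Illustrative sketch (names hypothetical): encode "which gestures cancel
// the current state" as a simple per-state lookup table.
const cancellation = {
  // While the longpress menu is up, flicks and other gestures are ignored;
  // only releasing the touch leaves the state.
  longpress:        { cancelledBy: [],               exitOn: ['release'] },
  // A still-pending longpress timer, by contrast, is cancelled by
  // movement-based gestures before it resolves.
  pendingLongpress: { cancelledBy: ['flick', 'roam'], exitOn: ['release'] },
};

// Returns the next state for an incoming gesture/event.
function nextState(state, event) {
  const rules = cancellation[state];
  if (rules.exitOn.includes(event)) return 'base';
  if (rules.cancelledBy.includes(event)) return 'base';
  return state; // everything else is ignored in this state
}

nextState('longpress', 'flick');        // ignored: stays in 'longpress'
nextState('longpress', 'release');      // returns to 'base'
nextState('pendingLongpress', 'flick'); // cancelled: returns to 'base'
```

Whether this stays simple once multi-stage chains (longpress → subkey selection) enter the picture is exactly the open design question.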
Reminder: without touchpath segmentation, we found it pretty difficult to develop a unified state machine - the thing that would merit use of the word "the" there. Individual, per-gesture state machines of some sort would likely be possible without segmentation, though.
As a reminder, that sentiment is exactly what led to this - to tackling touchpath segmentation. (Especially when considering that plus the notion of a singular state machine.) The resulting segments will greatly facilitate development of such "simple tests", and provide exactly that simplification for the rest of the model. I'm not saying that there aren't other approaches we could find that wouldn't also provide simplification, though. A few other notes:
Implements unit tests for the segmentation engine. Thanks to our design for the module, we can do this headlessly!
While it is also possible to do "integration"-style user tests - input events through to segmentation - it's probably better to start here first.
The "neatest" unit tests implemented here:
The names should be fairly descriptive; these feature recorded inputs designed to test certain aspects of the segmentation algorithm. The first four correspond to reasonable keyboard input gestures - even if we don't plan to support multi-segment flicks yet. The fifth... is there for a smidge of stress-testing. The first three are particularly 'clean' and allow for extra precision in the test assertions.
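For a flavor of what "headless" means here, the sketch below replays a recorded touchpath sample-by-sample and asserts on the resulting segments. Everything in it is illustrative: the toy segmenter, the recording format, and the `hold`/`move` labels are stand-ins, not the actual engine's API.

```javascript
// Hypothetical sketch of the headless test shape: replay a recording and
// assert on segment types. The real segmentation engine is far more involved.
function segmentRecording(samples, moveThreshold = 5) {
  // Toy segmenter: split the path into 'hold' and 'move' segments based
  // on per-sample displacement, merging consecutive same-type samples.
  const segments = [];
  let prev = null;
  for (const s of samples) {
    const dist = prev ? Math.hypot(s.x - prev.x, s.y - prev.y) : 0;
    const type = dist >= moveThreshold ? 'move' : 'hold';
    const last = segments[segments.length - 1];
    if (last && last.type === type) {
      last.samples.push(s);
    } else {
      segments.push({ type, samples: [s] });
    }
    prev = s;
  }
  return segments;
}

// A recorded "hold, then flick" path, as a headless test might replay it.
const recording = [
  { x: 10, y: 10, t: 0 },
  { x: 10, y: 11, t: 100 }, // sub-threshold jitter: still part of the 'hold'
  { x: 30, y: 11, t: 150 }, // large displacement: start of a 'move' segment
  { x: 50, y: 11, t: 200 },
];
const segs = segmentRecording(recording);
// Expect two segments: an initial hold, then the flick's movement.
```

No DOM, no input events: just recorded samples in, segments out, which is what makes these tests cheap to run and debug.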
Many of the other unit tests exist to ensure that the "API" we wish to provide when publishing segments operates as expected. Those turned out to be super-useful in polishing things up: I discovered a number of rough edges while developing them. I'm glad those bits were discovered now, while the "search space" for related bugs was still (relatively) small!
Since the point of this is to implement automated testing, the new unit tests should speak for themselves. So...
@keymanapp-test-bot skip
When successful:
And a few more for good measure: