feat(web): headless segmentation unit tests 🐵#7157

Merged
jahorton merged 19 commits into feature-gestures from feat/web/segmentation-headless-tests
Oct 11, 2022

Conversation

@jahorton
Contributor

@jahorton jahorton commented Aug 30, 2022

Implements unit tests for the segmentation engine. Thanks to our design for the module, we can do this headlessly!

While it is also possible to do "integration"-style user tests - input events through to segmentation - it's probably better to start here first.


The "neatest" unit tests implemented here:

[screenshot: list of the new segmentation unit tests]

The names should be fairly descriptive; these feature recorded inputs designed to test certain aspects of the segmentation algorithm. The first four correspond to reasonable keyboard input gestures - even if we don't plan to support multi-segment flicks yet. The fifth... is there for a smidge of stress-testing. The first three are particularly 'clean' and allow for extra precision in the test assertions.

Many of the other unit tests exist to ensure that the "API" we wish to provide when publishing segments operates as expected. Those turned out to be super-useful in polishing things up, as I discovered a number of "rough edges" during development of those tests. I'm glad those bits were discovered now, while the "search space" for related bugs was (relatively) small!
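To illustrate the style of headless test involved - all names, types, and thresholds below are hypothetical, not Keyman's actual API - a recorded touchpath can be replayed sample-by-sample and the resulting segmentation asserted directly, with no DOM or browser required:

```typescript
// Hypothetical sketch - types and names are illustrative, not Keyman's API.
interface Sample { x: number; y: number; t: number; }

// Split a recorded touchpath into segments wherever the touchpoint holds
// nearly still (< eps px of drift) for at least holdMs milliseconds.
function segmentByPause(path: Sample[], eps = 2, holdMs = 150): Sample[][] {
  const segments: Sample[][] = [];
  let current: Sample[] = [];
  let pauseStart: Sample | null = null;

  for (const s of path) {
    current.push(s);
    if (pauseStart && Math.hypot(s.x - pauseStart.x, s.y - pauseStart.y) < eps) {
      if (s.t - pauseStart.t >= holdMs) {
        // The touchpoint has held still long enough: close out this segment.
        segments.push(current);
        current = [];
        pauseStart = null;
      }
    } else {
      // Movement exceeded the drift threshold; restart the "hold" timer here.
      pauseStart = s;
    }
  }
  if (current.length) { segments.push(current); }
  return segments;
}
```

A headless assertion can then check, for example, that a recorded hold-then-move path yields exactly the expected segment count and boundaries.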


Since the point of this is to implement automated testing, the new unit tests should speak for themselves. So...

@keymanapp-test-bot skip

When successful:

[screenshot: passing test results]

And a few more for good measure:

[screenshot: additional passing test results]

@jahorton jahorton added this to the A16S10 milestone Aug 30, 2022
@keymanapp-test-bot

keymanapp-test-bot bot commented Aug 30, 2022

User Test Results

Test specification and instructions

User tests are not required


```js
for(let recordingID of testRecordings) {
  it(`${recordingID}.json`, function() {
    this.timeout(2 * testconfig.timeouts.standard);
```
Contributor Author

@jahorton jahorton Aug 31, 2022

For longer pre-recorded tests (longer in time-duration), the new segmentation mechanics can trigger a large number of mocked setTimeout calls. On my development machine, fully emulating those sequences can require extra time. This usually happened with a pre-recorded test lasting 11 seconds, even though it had less movement than one of the others.

It may be possible to set a flag somewhere to disable segmentation for these tests, which would also dodge the cause.
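For context, a mocked clock works by fast-forwarding virtual time rather than waiting in real time. The sketch below is illustrative only - the actual tests rely on an existing timer-mocking setup rather than this class - but it shows why an 11-second recording replays quickly while still paying a per-callback bookkeeping cost:

```typescript
// Illustrative mock clock - not the mocking library the tests actually use;
// this just demonstrates the fast-forward principle.
type Pending = { at: number; fn: () => void };

class MockClock {
  private now = 0;
  private queue: Pending[] = [];

  setTimeout(fn: () => void, delayMs: number): void {
    this.queue.push({ at: this.now + delayMs, fn });
  }

  // Advance virtual time by ms, firing due callbacks in timestamp order.
  // Re-sorts each iteration so callbacks may schedule further timeouts.
  tick(ms: number): void {
    const target = this.now + ms;
    for (;;) {
      this.queue.sort((a, b) => a.at - b.at);
      if (!this.queue.length || this.queue[0].at > target) { break; }
      const next = this.queue.shift()!;
      this.now = next.at;
      next.fn();
    }
    this.now = target;
  }
}
```

An 11-second recording then replays in a single synchronous `tick(11000)`; only the per-callback overhead remains, which is the extra time noted above.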

@jahorton jahorton force-pushed the feat/web/segmentation-headless-tests branch from a83f95b to 5c93f51 Compare September 1, 2022 08:19
@jahorton jahorton marked this pull request as ready for review September 2, 2022 07:56
Member

@mcdurdin mcdurdin left a comment

I can't really give much of an opinion on the validity of the tests... I've spent a couple of hours reviewing this and there's nothing glaringly obvious. Pretty concerned about the mathematical complexity of gestures and really wondering if there is a simpler path.

@jahorton
Contributor Author

I can't really give much of an opinion on the validity of the tests... I've spent a couple of hours reviewing this and there's nothing glaringly obvious. Pretty concerned about the mathematical complexity of gestures and really wondering if there is a simpler path.

I've wondered as well. I've looked back at https://www.npmjs.com/package/@use-gesture/vanilla a few times, since it seems to be a pretty prominent gesture library... but even if it could do what we need, the time it'd take me to learn the library and figure out how to do everything we want seems pretty high. It seems specifically tuned for user-interactive UI elements & animations rather than keyboard-oriented gestures - it's all about "move", "drag", "zoom", "scroll", and "pinch". The paradigm is different enough that it'd take a lot of code 'wrangling'; I haven't seen a simple way to 'chain' gestures in its API. (Longpresses after a roaming-touch would be very tricky without 'chaining' them from component gestures... and even then, there's longpress -> subkey selection, which I don't see being modeled cleanly in its paradigm either.)

I also found "zingtouch", but the demo doesn't appear to support multi-touch. Trying to start a second drag in the "swipe" demo instantly killed the first drag without appearing to model the second drag. That's not a great sign in its favor, at least not for our use case.

use-gesture, at least, doesn't die instantly when a second touch starts; it faithfully sticks with the initial touchpoint without so much as a hiccup.

@mcdurdin
Member

I've wondered as well.

Glad to hear that you've been looking at other libraries. I wonder if some of their models of tackling gesture recognition may be helpful even if the libraries themselves don't meet our needs? Have you dug into the sources of any of them? (I'm just curious, really!)

@jahorton
Contributor Author

jahorton commented Sep 16, 2022

.... I wonder if some of their models of tackling gesture recognition may be helpful even if the libraries themselves don't meet our needs? Have you dug into the sources of any of them? (I'm just curious, really!)

I hadn't, but it wasn't too hard to find use-gesture's core recognition engine code.

A few notes from a quick look:

  • Main source folder of interest: https://github.com/pmndrs/use-gesture/tree/main/packages/core/src/engines
  • They (naturally) expect to handle only TouchEvents - we'd have to map mouse events to them
    • ...as a fallback? There are parts that reference the PointerEvent type instead. (And I can interact with their demos via mouse, which corroborates this.)
    • PointerEvent requires Chrome 55, though, so relying solely on it is out for us for now - Android 5.0 ships with an older Chrome unless updated.
      • It would definitely simplify the event tracking system if we could use it, though.
  • There's literally no touchpath segmentation. Considering the gestures offered... this checks out. All the gesture types they offer are simple, single-motion gestures (single-motion per touch-point, to account for pinch).
    • When you don't need to worry about one gesture 'turning into' another gesture, things are simpler.

When your paradigm doesn't expect touchpath segmentation, all the calculations can safely assume that they're part of one continuous motion. That assumption greatly simplifies everything for them, comparatively speaking. Were we to adopt their code, we'd have to handle segmentation ourselves when and where needed, with gestures cancelling other gestures at a relatively high level rather than within the gesture-recognition engine itself - which is where our design aims to handle it.

All of that said, pinch may make a useful reference if and when we decide to implement caret panning. I still haven't fully wrapped my head around handling of multi-touchpoint gesture types, so that'd definitely be a helpful reference.

@mcdurdin
Member

gestures cancelling other gestures

Remind me which gestures need cancellation by other gestures in our model?

@jahorton
Contributor Author

gestures cancelling other gestures

Remind me which gestures need cancellation by other gestures in our model?

  • Longpresses cancel flick processing, for one.
    • If a key's been held long enough, a flick-like motion to one of the displayed subkeys should not produce a flick gesture!
  • "Roaming touches" are cancelled once any other gesture type is recognized.
  • Once longpress is sufficiently long, the part caring about "length" of the press needs to drop, "lock in" the key that is longpressed, and then allow subkey selection based on that key.
    • Longpresses are multi-stage gestures.
  • If in multitap mode, any recognition of longpresses or flick motion should auto-cancel the ongoing multitap state.

To name a few off the top of my head.
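For illustration, the rules above could be captured declaratively - the gesture names here are shorthand for this sketch, not actual engine identifiers:

```typescript
// Shorthand sketch - gesture names are illustrative, not engine identifiers.
type Gesture = 'flick' | 'longpress' | 'subkey-select' | 'roam' | 'multitap';

// Which recognized gestures cancel a given active gesture/state:
const cancelledBy: Record<Gesture, Gesture[]> = {
  'flick':         ['longpress'],                      // a resolved longpress drops flick processing
  'roam':          ['flick', 'longpress', 'multitap'], // any recognized gesture ends roaming
  'multitap':      ['longpress', 'flick'],             // longpress or flick motion drops multitap state
  'longpress':     [],                                 // transitions into subkey selection instead
  'subkey-select': [],
};

function shouldCancel(active: Gesture, recognized: Gesture): boolean {
  return cancelledBy[active].includes(recognized);
}
```

Keeping the rules in one table like this makes "which gestures cancel which" easy to audit and to unit-test.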

@mcdurdin
Member

Longpresses cancel flick processing, for one.

  • If a key's been held long enough, a flick-like motion to one of the displayed subkeys should not produce a flick gesture!

As soon as the longpress timer resolves, it puts us into "longpress" state -- the longpress subkey menu is visible. Flicks and any other gestures should be ignored while in longpress state -- all interactions are with the longpress menu. We return to base state once the user releases their finger.

  • "Roaming touches" are cancelled once any other gesture type is recognized.

I'd be comfortable with dropping roaming touches altogether.

  • Once longpress is sufficiently long, the part caring about "length" of the press needs to drop, "lock in" the key that is longpressed, and then allow subkey selection based on that key.

Is this the same as above?

  • Longpresses are multi-stage gestures.
  • If in multitap mode, any recognition of longpresses or flick motion should auto-cancel the ongoing multitap state.

I have a sense that we don't need a generic solution for resolving multi-stage gestures. If we instead can have simple tests at each point in the state machine to determine which other gestures will cancel the existing state, we may be able to simplify the entire model somewhat? It'd be good to review this together, perhaps next week? (I wish we could do this in-person, it's so much easier to whiteboard etc)

@jahorton
Contributor Author

in the state machine

Reminder: without touchpath segmentation, we found it pretty difficult to develop a unified state machine - the thing that would merit use of the word "the" there. Individual, per-gesture state machines of some sort would likely be possible without segmentation, though.

I have a sense that we don't need a generic solution for resolving multi-stage gestures. If we instead can have simple tests at each point in the state machine to determine which other gestures will cancel the existing state, we may be able to simplify the entire model somewhat?

As a reminder, that sentiment is exactly what led to this - to tackling touchpath segmentation. (Especially when considering that + the notion of a singular state machine.) The resulting segments will greatly facilitate development of such "simple tests", and are that simplification for the rest of the model.

I'm not saying that there aren't other approaches we could find that wouldn't also provide simplification, though.
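As a sketch of that idea - the segment shape and thresholds here are hypothetical, not Keyman's actual values - once a path is segmented, each "simple test" can be a predicate over a single segment's summary statistics rather than over raw touch samples:

```typescript
// Hypothetical sketch - segment shape and thresholds are illustrative.
interface PathSegment {
  type: 'hold' | 'move';
  durationMs: number;
  distancePx: number;
}

const LONGPRESS_MS = 500; // assumed threshold, not Keyman's actual value
const FLICK_MIN_PX = 40;  // assumed threshold, not Keyman's actual value

// A "simple test" per state: classify one completed segment in isolation.
function classify(seg: PathSegment): 'longpress' | 'flick' | null {
  if (seg.type === 'hold' && seg.durationMs >= LONGPRESS_MS) { return 'longpress'; }
  if (seg.type === 'move' && seg.distancePx >= FLICK_MIN_PX) { return 'flick'; }
  return null;
}
```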


A few other notes:

  • If we should ever desire to handle multi-segment flicks, well, we'd need some sort of 'multi-segment' detection.

    I know our current plans call for only ever handling single-segment ones, true. It's just something to consider if we think this might ever end up on our roadmap.

  • Similarly re: swipe, should we ever want to tackle that for keyboards with paired lexical models, once we enable auto-correct.

@mcdurdin mcdurdin modified the milestones: A16S10, A16S11 Sep 17, 2022
@jahorton jahorton mentioned this pull request Sep 19, 2022
1 task
@mcdurdin mcdurdin modified the milestones: A16S11, A16S12 Oct 2, 2022
Base automatically changed from feat/web/segment-synthesis to feature-gestures October 11, 2022 01:10
@jahorton jahorton requested a review from sgschantz as a code owner October 11, 2022 01:10
@jahorton jahorton merged commit 32b8263 into feature-gestures Oct 11, 2022
@jahorton jahorton deleted the feat/web/segmentation-headless-tests branch October 11, 2022 01:10