
[server] Expose HTTP service to plugins #12464

@kimjoar

Description


API Availability

7.4

7.6-7.9

Questionable

  • Add response interceptor
  • Add pre-handler interceptor

Summary

The current http server in the new platform is pure exploration at this stage (e.g. only get is implemented so far). In addition to how we handle get, post, put and delete, there are some things we need to think about when designing the http server.

First of all, this is not a "generic" http server — we wrap an existing http server to build something specific for Kibana. One of the reasons we wrap an existing http server is that we want to be able to upgrade the http server at any time, which means we need to control the apis. This also means we can expose a subset of functionality that fits with Kibana. We can also make Kibana-specific adjustments that simplify (and clarify) the normal patterns we see in plugins.

Below I've described some of the things I think we need to think about (and explore) in the new http server.

Request-scoped services

In current Kibana we use callWithRequest often. This means we pass around request to many places. To alleviate this we have started to "request-scope" certain services, e.g. getUiSettingsService which depends on getSavedObjectsClient, which depends on the request to scope callWithRequest. This simplifies getting a savedObjectsClient, but it also adds some problems:

  • Everything we put on request becomes available everywhere. This is precisely what we're trying to avoid with the new plugin system design, where we are very specific about what you receive when you depend on another plugin. I think it's essential that transitive dependencies don't provide any values to a plugin, only direct dependencies can expose values.
  • Putting things on the request makes it more difficult to type.
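To make the chaining concrete, here is a minimal sketch of the pattern described above — each service becomes "request-scoped" by closing over the request. All names and shapes here are illustrative stand-ins, not the real Kibana implementations:

```typescript
// Illustrative stand-in for the Kibana request object.
interface KibanaRequest {
  headers: Record<string, string>;
}

// callWithRequest needs the request in order to forward credentials.
const callWithRequest = (req: KibanaRequest, endpoint: string) =>
  `${endpoint} as ${req.headers['authorization'] ?? 'anonymous'}`;

// The saved objects client is scoped by closing over the request...
const getSavedObjectsClient = (req: KibanaRequest) => ({
  get: (type: string) => callWithRequest(req, `GET saved_objects/${type}`),
});

// ...and ui settings is scoped by depending on the already-scoped client,
// so the request implicitly threads through every layer.
const getUiSettingsService = (req: KibanaRequest) => {
  const savedObjectsClient = getSavedObjectsClient(req);
  return { get: (key: string) => savedObjectsClient.get(`config:${key}`) };
};

const req: KibanaRequest = { headers: { authorization: 'Basic abc' } };
console.log(getUiSettingsService(req).get('theme'));
```

The request ends up reachable from every service in the chain, which is exactly the transitive-dependency leak described above.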

I think we need to explore ideas that don't involve adding functions to the request like this. It feels like a leaky abstraction.

I don't have a clear api in mind here, so I think we need to explore ideas. Currently in the new-platform branch there is an onRequest method that can be specified on a plugin's router. It's "pure exploration" code at the current time, so don't treat it as more than that. I'd be happy to see that go away for a better/different abstraction.

I think the best way to solve this problem is to find out how we want to handle #12442, as it seems like callWithRequest is the only reason we scope services today.

An example of how to avoid using request-scoped services is in a temporary branch, made into a PR here only for ease of reviewing: #14482

Validate fields

In the current PoC code we've implemented validating url params, query params and request body, which looks like this:

router.get(
  {
    path: '/:type',
    validate: {
      params: object({
        type: string()
      }),
      query: object({
        page: maybe(number({ min: 1 })),
        per_page: maybe(number({ min: 1 }))
      })
    }
  },
  async (req, res) => {
    const { params, query } = req;

    const savedObjects = await savedObjectsService.find({
      type: params.type,
      page: query.page,
      perPage: query.per_page
    });

    return res.ok(savedObjects);
  }
);

When adding validation, all fields must validate (i.e. the entire object must validate, not just the specified keys). There are a couple of nice benefits from doing this validation:

  • The expected format is clearly documented (and guaranteed to stay up-to-date).
  • We can (automatically) create TS types for the data, so params and query above will be typed.
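To show that the "automatic TS types" point is achievable, here is a tiny hypothetical implementation of the schema helpers used in the example above (`object`, `string`, `number`, `maybe`). The real schema library would differ; this is only a sketch of how static types can be inferred from a runtime schema:

```typescript
// A schema carries both a runtime validator and its static result type.
type Schema<T> = { validate: (input: unknown) => T };
type SchemaType<S> = S extends Schema<infer U> ? U : never;

const string = (): Schema<string> => ({
  validate: (input) => {
    if (typeof input !== 'string') throw new Error('expected string');
    return input;
  },
});

const number = (opts: { min?: number } = {}): Schema<number> => ({
  validate: (input) => {
    if (typeof input !== 'number' || (opts.min !== undefined && input < opts.min)) {
      throw new Error('expected number');
    }
    return input;
  },
});

const maybe = <T>(schema: Schema<T>): Schema<T | undefined> => ({
  validate: (input) => (input === undefined ? undefined : schema.validate(input)),
});

const object = <P extends Record<string, Schema<unknown>>>(
  props: P
): Schema<{ [K in keyof P]: SchemaType<P[K]> }> => ({
  validate: (input) => {
    if (typeof input !== 'object' || input === null) throw new Error('expected object');
    const record = input as Record<string, unknown>;
    // The entire object must validate: unknown keys are rejected.
    for (const key of Object.keys(record)) {
      if (!(key in props)) throw new Error(`unexpected key: ${key}`);
    }
    const result: Record<string, unknown> = {};
    for (const key of Object.keys(props)) {
      result[key] = props[key].validate(record[key]);
    }
    return result as unknown as { [K in keyof P]: SchemaType<P[K]> };
  },
});

// `query` is statically typed as { page: number | undefined; per_page: number | undefined }.
const querySchema = object({
  page: maybe(number({ min: 1 })),
  per_page: maybe(number({ min: 1 })),
});
const query = querySchema.validate({ page: 2 });
console.log(query.page); // 2
```

The handler never sees unvalidated input, and the inferred type stays in sync with the schema by construction.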

In the current implementation we do not inject schema, but rely on it being injected further up. I think we should change this to inject schema directly into validate instead, so it could look like this:

router.get(
  {
    path: '/:type',
    validate: ({ object, string, maybe, number }) => ({
      params: object({
        type: string()
      }),
      query: object({
        page: maybe(number({ min: 1 })),
        per_page: maybe(number({ min: 1 }))
      })
    })
  },
  async (req, res) => {
    const { params, query } = req;

    const savedObjects = await savedObjectsService.find({
      type: params.type,
      page: query.page,
      perPage: query.per_page
    });

    return res.ok(savedObjects);
  }
);

Currently the new platform can perform validation on query params, url params and the body. Should we explore doing this for headers too? We can't validate all headers, of course, but maybe we can do it just for the specific headers you need?
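A hedged sketch of what "just the specific headers you need" could look like: only the headers a route declares are validated and exposed to the handler, while undeclared headers stay out of sight. The names (`required`, `optional`, `validateHeaders`) are assumptions for illustration, not a real API:

```typescript
// Each declared header gets a rule; undeclared headers are not exposed.
type HeaderRules = Record<string, (value: string | undefined) => string | undefined>;

const required = (value: string | undefined): string => {
  if (value === undefined) throw new Error('missing required header');
  return value;
};
const optional = (value: string | undefined) => value;

function validateHeaders(
  rules: HeaderRules,
  incoming: Record<string, string>
): Record<string, string | undefined> {
  const validated: Record<string, string | undefined> = {};
  for (const name of Object.keys(rules)) {
    // Header names are case-insensitive, so normalize before lookup.
    validated[name] = rules[name](incoming[name.toLowerCase()]);
  }
  return validated; // the handler sees only the declared subset
}

const headers = validateHeaders(
  { 'kbn-version': required, 'accept-language': optional },
  { 'kbn-version': '7.0.0', cookie: 'secret' }
);
console.log(headers); // the cookie header is not passed through
```

This keeps the same "be strict about input" property as params/query/body validation: a handler can only read what it declared.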

Are there use-cases where this wouldn't work?

Not provide the full request

In current Kibana the handler for each request receives the full hapi request and response objects. I think we should explore not giving the handler the full request object. Instead of giving request handlers the full request, I think we should ONLY give it the validated fields (described above).

Why go this route? It ensures we're strict about input, treats failures on input the same way across Kibana endpoints, makes us depend less on these objects and all their features (e.g. mutating them and passing them down the stack), and makes it easier for us to perform major changes to the underlying request and response objects (e.g. upgrading majors) without impacting the plugin apis.

In theory we could give access to the underlying request object through an "escape hatch", e.g.

router.get(
  {
    path: '/:type',
    validate: ({ object, string }) => ({
      params: object({
        type: string()
      })
    })
  },
  async (req, res) => {
    const rawRequest = req.unstable_getRawRequest();
    // ...
  }
);

Escape hatch

Let's explore this "escape hatch" idea. With the new platform we want to create a more "stable plugin contract" and ideally "not expose internals or dependencies to plugins". However, we sometimes have plugins that need very specific access to apis, which doesn't really fit "most plugins". Two examples are Security and Reporting.

Instead of building stable apis for all use-cases, I think we should consider having an "escape hatch" that gives you access to the underlying library/framework objects. A good example of this is the request example from above:

const rawRequest = req.unstable_getRawRequest();

Using unstable_ in the name indicates that this is not a part of the "stable Kibana api", and that it can change at any time. We could even do a major upgrade in a patch release if we wanted to. It's up to plugin authors to keep up-to-date if they rely on these apis.

But the nice thing is that it allows us a way of not solving for all use-cases immediately.

I don't think this is something that should be used often, but it definitely feels like there are use-cases for this kind of approach. You can basically say we handle 95% of use-cases with the stable apis, but for special cases (e.g. Security) you might sometimes need to get the raw object instead.
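The wrapper shape this implies could look roughly like the following: the stable surface exposes only validated fields, and `unstable_getRawRequest` hands back the underlying framework request for the special cases. Everything here beyond the `unstable_getRawRequest` name from the example above is an assumption:

```typescript
// Stand-in for the underlying framework request (e.g. hapi's Request).
interface RawFrameworkRequest {
  raw: { req: unknown };
  auth: unknown;
}

class KibanaRequestFacade<Params> {
  constructor(
    // The stable surface: only the validated fields.
    public readonly params: Params,
    private readonly rawRequest: RawFrameworkRequest
  ) {}

  // The prefix signals "not part of the stable Kibana api; may break
  // in any release".
  unstable_getRawRequest(): RawFrameworkRequest {
    return this.rawRequest;
  }
}

const raw: RawFrameworkRequest = { raw: { req: {} }, auth: { credentials: 'x' } };
const req = new KibanaRequestFacade({ type: 'index-pattern' }, raw);
console.log(req.params.type); // "index-pattern"
console.log(req.unstable_getRawRequest() === raw); // true
```

Most plugins would never touch the escape hatch, but Security-style plugins could reach the raw object without the platform having to design a stable api for every niche need.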

Middleware, request filter, response filter

For the new platform we need to decide how we can run things before and after a request. In the current platform we rely on both the hapi lifecycle for this, and being able to specify pre on a route.

These are all the things that rely on the hapi lifecycle in Kibana today:

We also rely on pre to get the IndexPatternsService and the SavedObjectsClient, in addition to validating proxy routes for Console and for the basepath proxy.

If we find a solution for request-scoped services as described above, there are so few other things that depend on this right now, that I think we can rely on an "escape hatch" for this stuff. That way we can postpone the api design of this until we have more relevant use-cases.

This escape hatch needs to be exposed, so Security can rely on it. (TODO: Explore ideas of what this could look like)

(I'm definitely open to exploring ideas here too, but it feels like it's not the highest priority task we have for the http server.)

Format on error responses

(TODO: describe)

Is this defined in Kibana today? If so, can/should we use the same format in the new platform?

It feels like we should explore just continuing to use Boom to signal errors in the app, but we might not want to make that dependency available to plugins? Hm. Maybe we need to create our own KibanaError class with static helpers like notFound, forbidden, badRequest etc.
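A minimal sketch of that KibanaError idea — an error class with Boom-style static helpers, so plugins signal HTTP errors without a direct Boom dependency. The exact shape (status codes on the instance, helper names) is an assumption:

```typescript
// Hypothetical KibanaError: carries an HTTP status code alongside the message.
class KibanaError extends Error {
  constructor(message: string, public readonly statusCode: number) {
    super(message);
    this.name = 'KibanaError';
  }

  // Static helpers mirroring the common Boom constructors.
  static notFound = (message = 'Not Found') => new KibanaError(message, 404);
  static forbidden = (message = 'Forbidden') => new KibanaError(message, 403);
  static badRequest = (message = 'Bad Request') => new KibanaError(message, 400);
}

const err = KibanaError.notFound('no such saved object');
console.log(err.statusCode); // 404
console.log(err.message); // "no such saved object"
```

The http service could then catch any thrown KibanaError in one place and map it to the (still to be defined) error response format, keeping Boom an internal detail.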

How to fail requests

(TODO: describe. Code fails somewhere far down, what does the flow look like?)

Serve static files

(TODO: describe)

Plugins must be able to expose static files. This should likely just be some predefined folder, with no option to configure the file path. (TODO: What does the current plugin system do here?)

(TODO: How does this work with webpack et al when plugins are built on their own? Do they need to know which path they are being served from?)
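One possible convention, sketched with made-up paths: the platform derives both the filesystem folder and the public URL from the plugin id alone, with no per-plugin configuration. This would also give webpack a predictable publicPath per plugin:

```typescript
import { join } from 'node:path';

// Assumed convention: every plugin serves static files from a fixed
// `public/assets` folder, mounted at a URL derived from the plugin id.
const staticAssetsDir = (pluginDir: string) => join(pluginDir, 'public', 'assets');
const staticAssetsUrl = (pluginId: string) => `/plugins/${pluginId}/assets`;

console.log(staticAssetsDir('/kibana/plugins/my_plugin'));
console.log(staticAssetsUrl('my_plugin')); // "/plugins/my_plugin/assets"
```

Because the mapping is a pure function of the plugin id, a plugin built on its own could compute the path it will be served from without asking the platform.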

Metadata

Labels

  • Feature:Legacy Removal — Issues related to removing legacy Kibana
  • Feature:New Platform
  • Team:Core — Platform Core services: plugins, logging, config, saved objects, http, ES client, i18n, etc
  • enhancement — New value added to drive a business result
