Treat camera projected primitives in 3D space more like 2D objects #1025

@Wumpf

Description

Consider a camera projection (typically perspective, but orthographic shouldn't make a difference) in 3D space.
Today, the renderer renders all shapes that are part of the camera projection (there might be not only images but also lines, points, or even complex meshes) just like anything else in the scene, i.e. the user is expected to set the respective projection matrix as the world matrix for all of these objects.

This works nicely for the most part, but there are some things we need to fix (the actual tasks of this issue!):

  • any "view-space generated geometry" needs to behave as if it were viewed by the camera in question. Points & lines, which today follow the eye/main camera, should instead follow their own camera, making them appear flat on the surface
  • auto sizes need to be treated like they live in the realm of their camera
  • ...TODO other?
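To make the first task concrete, here is a sketch (pure Python, hypothetical names; the actual shaders work differently): point billboards are quads whose axes are derived from a camera position. Today that is always the main eye; primitives under a camera projection should instead use the position of their own camera so they render flat on its image plane.

```python
# Billboard axes for a camera-facing quad. The fix proposed above is to
# pass the *owning* camera's position here, not the main eye's.

def sub(a, b):
    return tuple(x - y for x, y in zip(a, b))

def norm(v):
    length = sum(x * x for x in v) ** 0.5
    return tuple(x / length for x in v)

def cross(a, b):
    return (a[1] * b[2] - a[2] * b[1],
            a[2] * b[0] - a[0] * b[2],
            a[0] * b[1] - a[1] * b[0])

def billboard_axes(point, camera_pos, world_up=(0.0, 1.0, 0.0)):
    """Right/up axes for a quad at `point` facing `camera_pos`."""
    to_camera = norm(sub(camera_pos, point))
    right = norm(cross(world_up, to_camera))
    up = cross(to_camera, right)
    return right, up

# Facing the owning camera (sitting on the +z axis) rather than the main eye:
right, up = billboard_axes(point=(0.0, 0.0, 0.0), camera_pos=(0.0, 0.0, 5.0))
print(right, up)  # (1.0, 0.0, 0.0) (0.0, 1.0, 0.0)
```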

Open questions:

  • how should we communicate these things to the renderer in a somewhat unified manner? Depending on depth handling, not all objects need the same amount of knowledge
  • what happens with the remaining depth? I.e. when we had 3D objects under the frustum that would have yielded a depth buffer?
    • Bunch of options, maybe should expose them?
      • flatten them onto the plane, i.e. ignore their depth for everything but depth offsetting (!), making it look the same as in a 2D space view
        • primitive depth offsetting only goes so far for larger ranges
      • use the "virtual depth buffer" depth, i.e. objects are distorted within the confines of the camera frustum
      • use view space depth, making it behave more like a depth map that someone placed into 3D
  • Should there be clipping to the frustum? If so, probably as an option.
    • is there a way to achieve this without expensive discard instructions in the shader?
  • ...TODO other?
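One possible way to expose the depth-handling options above to users, sketched in Python (all names hypothetical, and the "virtual depth buffer" mapping is shown linear for simplicity; a real projection is non-linear):

```python
from enum import Enum, auto

class FrustumDepthMode(Enum):
    FLATTEN = auto()         # ignore depth except for depth offsetting
    VIRTUAL_BUFFER = auto()  # keep depth, confined to the frustum (0..1)
    VIEW_SPACE = auto()      # pass camera-space depth through unchanged

def resolve_depth(mode, camera_space_depth, near, far):
    """Turn a primitive's camera-space depth into the value to write out."""
    if mode is FrustumDepthMode.FLATTEN:
        return near  # everything lands on the image plane
    if mode is FrustumDepthMode.VIRTUAL_BUFFER:
        # linear 0..1 within the frustum (real depth buffers are non-linear)
        return (camera_space_depth - near) / (far - near)
    return camera_space_depth  # VIEW_SPACE

print(resolve_depth(FrustumDepthMode.FLATTEN, 5.0, near=1.0, far=9.0))         # 1.0
print(resolve_depth(FrustumDepthMode.VIRTUAL_BUFFER, 5.0, near=1.0, far=9.0))  # 0.5
print(resolve_depth(FrustumDepthMode.VIEW_SPACE, 5.0, near=1.0, far=9.0))      # 5.0
```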

Note that the seemingly straightforward solution would be to have a literal 2D surface that we render in 3D space. This comes with a whole lot of drawbacks though:

  • the 2D surface would have limited resolution, meaning we're losing a lot of quality (there is no "good enough" resolution in this resampling problem!)
  • we'd need another render-to-texture pass, which is something our renderer doesn't have either!
  • it is very hard to handle "depth after projection"; at best we'd get a literal depth buffer
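The first drawback can be illustrated with a rough back-of-the-envelope calculation (numbers and names are illustrative, not from the renderer): the on-screen footprint of such a surface grows without bound as the main camera moves closer, so any fixed texture resolution eventually provides fewer texels than pixels.

```python
# Why there is no "good enough" resolution: projected size of a 1m-wide
# surface in main-camera pixels, for a viewer at various distances.

def on_screen_width_px(surface_width_m, distance_m, focal_px):
    """Projected width of the surface in main-camera pixels (pinhole model)."""
    return surface_width_m / distance_m * focal_px

TEXTURE_WIDTH = 1024  # whatever resolution we might pick for the 2D surface

for distance in (10.0, 1.0, 0.1):
    screen_px = on_screen_width_px(1.0, distance, focal_px=1000.0)
    undersampled = screen_px > TEXTURE_WIDTH
    print(distance, screen_px, undersampled)
# prints:
# 10.0 100.0 False
# 1.0 1000.0 False
# 0.1 10000.0 True
```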
