vroom notes

Migrate to Duktape-based infra. Reverse the order of communication: clients must implement callbacks.

  • Init - called when the program is started. Must return a JS script which is evaluated to set up the program.
  • Load resource - the server requests data from the client, for example "give me this image", or "require" for pulling in libraries (see the sketch after this list).
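
As a rough illustration of that callback interface, here is a C sketch of what a client might implement; every name in it (vroom_resource, vroom_client_callbacks) is an assumption for illustration, not the actual vroom API:

    /* Hypothetical client-side callback interface (names are illustrative). */
    #include <stddef.h>

    typedef struct vroom_resource {
        const char *uri;      /* e.g. an image URL or a require() module name */
        const void *data;     /* filled in by the client                      */
        size_t      len;
    } vroom_resource;

    typedef struct vroom_client_callbacks {
        /* Called when the program is started. Must return a JS source
         * string which the server evaluates to set up the program. */
        const char *(*init)(void *user);

        /* Called when the server needs a resource from the client, e.g.
         * "give me this image" or a require() for a library. Returns 0
         * on success and fills in res->data / res->len. */
        int (*load_resource)(void *user, vroom_resource *res);

        void *user;           /* opaque pointer passed back to the client */
    } vroom_client_callbacks;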

Components

  • Protocol thread - fetches resources from the app via HTTP calls
  • Duktape threads - run an "app" sandboxed in a Duktape environment
  • Main GL thread - makes real GL calls on behalf of the Duktape threads
  • Other module threads - OpenHMD, libinput, managing DRM screen hot-plugs, etc.

Protocol thread (resources)

The "protocol" module is a libcurl request queue running in a thread. It has an input and output queue. Requests for resources (images, source files and other content) are placed into the queue from duktape thread callbacks. Once complete the responses are placed onto the output queue and picked up by duktape threads. This allows async resource loading.

Duktape threads

Duktape threads are started in response to launching an app, by requesting its init JS source.

All GL calls from Duktape threads are queued onto the main GL thread. There are two modes: blocking and non-blocking. In blocking mode the Duktape thread waits for the response on the output queue from the main thread so it can return thread-local GL IDs to the Duktape context (a pipe with select?). Non-blocking requests happen for all GL calls made within the context of a frame render. Duktape "knows" what a frame render pass is because drawing is done from requestAnimationFrame. Every GL call that occurs inside that callback is used to assemble, in the main GL thread, a rendering pipeline of GL calls specific to that Duktape thread. Once the callback is complete, the Duktape thread locks the main GL thread's render pipeline for that client and replaces it with the new one.

This means GL IDs created inside frame-render code cannot be made available outside that code, because a non-blocking call has no opportunity to return an ID to the Duktape context. Perhaps all GL calls should be blocking to start with.
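
A rough sketch of the blocking path, assuming a Duktape/C binding that forwards a texture-creation call to the main GL thread and waits for the real ID. All names are illustrative, not vroom internals, and the wait uses a condition variable here, though a pipe plus select (as mentioned above) would work just as well:

    #include <duktape.h>
    #include <pthread.h>
    #include <GL/gl.h>

    typedef struct gl_cmd {
        enum { CMD_GEN_TEXTURE } op;
        GLuint result;                 /* filled in by the main GL thread */
        int    done;
        pthread_mutex_t mu;
        pthread_cond_t  cv;
        struct gl_cmd  *next;
    } gl_cmd;

    /* Enqueue onto the main GL thread's command queue (not shown here). */
    void gl_queue_submit(gl_cmd *cmd);

    /* Registered into the app's JS environment with duk_push_c_function()
     * and exposed as e.g. gl.createTexture(). Blocks until the main GL
     * thread has executed the call. */
    static duk_ret_t dk_create_texture(duk_context *ctx) {
        gl_cmd cmd = { .op = CMD_GEN_TEXTURE, .done = 0 };
        pthread_mutex_init(&cmd.mu, NULL);
        pthread_cond_init(&cmd.cv, NULL);

        gl_queue_submit(&cmd);

        pthread_mutex_lock(&cmd.mu);
        while (!cmd.done)                       /* blocking mode: wait for  */
            pthread_cond_wait(&cmd.cv, &cmd.mu);/* the main thread's answer */
        pthread_mutex_unlock(&cmd.mu);

        duk_push_uint(ctx, cmd.result);         /* return the GL ID to JS   */
        return 1;                               /* one return value         */
    }

    /* On the main GL thread: each dequeued command is executed for real
     * and the waiting Duktape thread is woken up. */
    void gl_execute(gl_cmd *cmd) {
        if (cmd->op == CMD_GEN_TEXTURE)
            glGenTextures(1, &cmd->result);
        pthread_mutex_lock(&cmd->mu);
        cmd->done = 1;
        pthread_cond_signal(&cmd->cv);
        pthread_mutex_unlock(&cmd->mu);
    }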

Main GL thread

The main GL thread is responsible for continuously drawing these render pipelines at the required frame rate. It is also responsible for building and swapping the render pipelines.

This means building the rendering context is blocking and slow (e.g. gl.bufferData), but the render pass is non-blocking and faster. Also, if a Duktape thread is slow producing a render frame, the main GL thread can continue to render the old render pipeline until the swap, maintaining the frame rate.
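
A sketch of what that main loop might look like, with illustrative types only; the per-client current/pending pair is the swap described above, and the helper functions are assumptions rather than real vroom code:

    #include <pthread.h>

    typedef struct pipeline pipeline;          /* recorded GL command list */
    void pipeline_execute(const pipeline *p);  /* replays the GL calls     */
    void pipeline_free(pipeline *p);

    typedef struct client {
        pipeline *current;      /* what the main GL thread draws each frame */
        pipeline *pending;      /* locked in after requestAnimationFrame    */
        pthread_mutex_t mu;
        struct client *next;
    } client;

    void gl_drain_blocking_commands(void);     /* service blocking requests */
    void swap_buffers(void);                   /* EGL/GLX swap, vsync paced */

    static void main_gl_loop(client *clients) {
        for (;;) {
            gl_drain_blocking_commands();      /* e.g. glGenTextures etc.   */

            for (client *c = clients; c; c = c->next) {
                /* Swap in a freshly built pipeline if the client finished
                 * one; otherwise keep drawing the old one so the frame
                 * rate holds even when the app's requestAnimationFrame
                 * callback was slow. */
                pthread_mutex_lock(&c->mu);
                if (c->pending) {
                    pipeline_free(c->current);
                    c->current = c->pending;
                    c->pending = NULL;
                }
                pthread_mutex_unlock(&c->mu);

                if (c->current)
                    pipeline_execute(c->current);  /* non-blocking replay   */
            }

            swap_buffers();
        }
    }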

Proposal

I believe the time is right to develop a GPU-powered desktop that breaks from the traditional 2D paradigm. Wayland is awesome, but its core is a 2D pixel buffer. We are possibly in a VR autumn, but that doesn't mean we should accept a 2D future for all time.

I deliberately do not call this a VR desktop. I believe a desk-mounted monitor will always be a convenient way to view digital content, and donning a fully immersive VR helmet will never be the only way to consume it. However, I do see a future where a VR (or AR) headset is a common household device.

Therefore a desktop for the future is one that seamlessly integrates desktop monitors, AR headsets or glasses, and fully immersive VR, plus the full range of input devices - all simultaneously. There is no reason to prevent someone with a VR helmet from using a regular mouse, nor to prevent someone using a desktop monitor from using a VR controller. It should be about allowing all possible combinations of user interaction.

This means full hot-plug support for all input and output devices, with configurable behaviors for different combinations. This would allow users to define their preferred way of operating their machines, rather than locking them in and stating, for example, "this is a VR desktop and you can only use it with a VR headset".

It also means moving towards the 3D mesh being a first-class citizen of the desktop. We should be able to develop fully 3D applications, regardless of whether they are viewed on a desktop monitor or from within an immersive environment.

Up until fairly recently

There are a few major hurdles to this (shaders, vertex data). GPU shaders are an incredibly flexible way of

compositor issues