OZ OS

Event driven. Pressing a key generates an event. Event queues store data to be consumed. There are no executables; the units of logic are functions. Functions can be written to respond to events, and they can also produce events. All functions live in memory, and the whole image can be persisted to storage.
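As a rough sketch of that model in C (all names here are illustrative, not taken from OZ): an event queue is a buffer with a producer function and a consumer function attached.

    #include <stdint.h>
    #include <stdbool.h>

    /* Illustrative sketch only; none of these names come from OZ itself.
       One event queue with a producer and a consumer function attached. */

    typedef struct queue queue_t;

    /* Producers put events on the queue; consumers drain them. Both are
       plain functions that run to completion and return to the kernel. */
    typedef void (*producer_fn)(queue_t *q);
    typedef void (*consumer_fn)(queue_t *q);

    struct queue {
        uint8_t     data[64];   /* small ring buffer for the sketch        */
        uint8_t     head;       /* next byte to consume                    */
        uint8_t     tail;       /* next free slot to produce into          */
        producer_fn produce;    /* set to NULL when the producer detaches  */
        consumer_fn consume;
        consumer_fn teardown;   /* called once when the stream has ended   */
    };

    static bool queue_empty(const queue_t *q) { return q->head == q->tail; }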

The job of the kernel is to coordinate event producers and make sure that producers are given time to produce events for their consumers. Consumers always run to completion and there are never event loops inside applications. The only event loop is the kernel's, which dispatches events to functions.

The only interrupt source is the keyboard (a serial device), whose handler pushes key events onto a queue and returns to the main kernel loop. Each queue has a producer and a consumer attached.
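A minimal sketch of that path, reusing the queue type above. read_serial_port() and key_queue are hypothetical stand-ins, and on the real hardware the handler would use the shadow registers as described later; the full-queue check is omitted for brevity.

    /* Hypothetical keyboard handler: read one byte from the serial device,
       push it onto the key event queue, then return to whatever the kernel
       loop was doing. Runs with interrupts already disabled (ISR context). */
    extern queue_t key_queue;
    extern uint8_t read_serial_port(void);   /* stand-in for an IN instruction */

    void keyboard_isr(void)
    {
        uint8_t key = read_serial_port();
        key_queue.data[key_queue.tail] = key;
        key_queue.tail = (key_queue.tail + 1) % sizeof key_queue.data;
    }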

Reading files

A function can produce an event requesting that a data stream be set up from a file. The filesystem code responds to that event and creates a new data stream. An application function is (somehow) attached to that stream.
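One way the filesystem's response could look, continuing the earlier sketch; alloc_queue(), kernel_register_queue() and file_read_producer() are hypothetical helpers, not part of OZ.

    /* Sketch: respond to a "request file stream" event by allocating a
       queue, attaching the filesystem's byte-reading producer plus the
       requesting application's consumer, and handing the queue to the
       kernel's status table. */
    extern queue_t *alloc_queue(void);
    extern void     kernel_register_queue(queue_t *q);
    extern void     file_read_producer(queue_t *q);  /* reads a few bytes per call */

    void fs_handle_open_request(consumer_fn app_consumer, consumer_fn app_teardown)
    {
        queue_t *q  = alloc_queue();
        q->produce  = file_read_producer;
        q->consume  = app_consumer;
        q->teardown = app_teardown;
        kernel_register_queue(q);
    }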

The kernel event loop inspects all event queues. It knows that a producer is attached to this file event queue, so it calls the producer function. The producer reads some bytes from the file, puts them on the queue and returns to the kernel. The kernel then calls the consumer function to pull bytes from the queue.

When the file has been completely read, the producer detaches from the queue. The kernel then knows it need not call the producer any more and keeps calling the consumer until the queue is empty. Once the queue is empty it tears the queue down and calls a consumer teardown function to let the consumer know that the file has ended. The kernel's per-queue cycle, sketched in C after the list, is:

  • Cycle through queue status table
  • Call producer function
  • Check queue empty
  • Call consumer function
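A minimal sketch of that cycle, continuing the earlier queue sketch; the fixed-size queue_table stands in for the queue status table.

    /* A detached producer (produce == NULL) plus an empty queue means the
       stream has ended: call the teardown hook and free the table slot. */
    #define MAX_QUEUES 8
    static queue_t *queue_table[MAX_QUEUES];   /* hypothetical status table */

    void kernel_loop(void)
    {
        for (;;) {
            for (int i = 0; i < MAX_QUEUES; i++) {
                queue_t *q = queue_table[i];
                if (!q)
                    continue;
                if (q->produce)
                    q->produce(q);          /* give the producer a turn       */
                if (!queue_empty(q))
                    q->consume(q);          /* consumer runs to completion    */
                else if (!q->produce) {     /* detached and fully drained     */
                    if (q->teardown)
                        q->teardown(q);     /* tell the consumer: end of file */
                    queue_table[i] = NULL;  /* tear the queue down            */
                }
            }
        }
    }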

All hardware interrupt handlers use the shadow register exchange for speed and quickly write to their queue while interrupts are disabled. The kernel loop, producer and consumer functions can be interrupted most of the time. Interrupts are disabled only when writing to or reading from a system queue that can also be written by an interrupt (the serial device). Perhaps it is easier to just always disable them around queue access.

The kernel context is used for cycling through queues to find the next action required. There is a context switch right before a producer call and another right before a consumer call. Before entering a callback, all registers for its environment are restored. Why restore registers if we know a function is never interrupted in order to switch to another function? Perhaps a context switch is simply restoring a stack pointer, and possibly one day doing a memory page swap.

When a callback needs to read or write a queue it calls a kernel routine. This disables interrupts and does a shadow register swap, using the shadow registers for the queue manipulation. Then the registers are swapped back, interrupts are re-enabled and control returns to the callback function.
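In C terms the routine might look like the sketch below, continuing the earlier queue sketch. di() and ei() are placeholders for the Z80 DI/EI instructions; the shadow register exchange (EXX / EX AF,AF') has no C equivalent and is only indicated in the comments.

    extern void di(void);   /* disable interrupts  - placeholder */
    extern void ei(void);   /* re-enable interrupts - placeholder */

    /* Push one byte onto a queue that an interrupt may also write. */
    void kernel_queue_put(queue_t *q, uint8_t byte)
    {
        di();                                    /* no interrupt may touch q now */
        /* (real kernel: swap to the shadow registers here) */
        q->data[q->tail] = byte;
        q->tail = (q->tail + 1) % sizeof q->data;
        /* (real kernel: swap the main registers back) */
        ei();                                    /* interrupts back on, return   */
    }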

What happens when a producer writes more to a queue than the free space left in it? Ideally the callback would be paused and the kernel main loop allowed to proceed until a consumer has pulled data from the queue and made space. In that case the register environment of the producer needs to be saved - probably on the callback environment stack.

Sets and chunks

Managing memory means, in particular, managing several variable-length queues that can be added and removed dynamically. There are sets and chunks. Sets are lists of chunks or of further sets. Sets and chunks can each be up to 32k in size, but the outermost set cannot contain more than 32k of content in total. Each has a two-byte header that stores 1 bit for the type and 15 bits for the size.
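A sketch of that header in C. The bit assignment (bit 15 = type, set vs. chunk) and the little-endian layout are assumptions for illustration, and whether the size field counts the two header bytes themselves is left open here.

    #include <stdint.h>

    #define HDR_TYPE_SET  0x8000u      /* bit 15: 1 = set, 0 = chunk (assumed) */
    #define HDR_SIZE_MASK 0x7FFFu      /* bits 0-14: size, up to 32k - 1       */

    static uint16_t hdr_read(const uint8_t *p)        { return (uint16_t)(p[0] | (p[1] << 8)); }
    static void     hdr_write(uint8_t *p, uint16_t h) { p[0] = h & 0xFF; p[1] = h >> 8; }

    static int      is_set(const uint8_t *p)  { return (hdr_read(p) & HDR_TYPE_SET) != 0; }
    static uint16_t size_of(const uint8_t *p) { return hdr_read(p) & HDR_SIZE_MASK; }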

Appending to a set

The end of the set is its location plus its size. The data is written at the end of the set and the set's size is increased by the size of the chunk or set being appended. What about nested sets? All data from the write point of the inner set up to the end of the outermost set must be shifted along by the size of the item being appended, and the size of every enclosing set must grow by the same amount. There must always be an outermost set, and its address must be passed to append() along with the inner set being appended to.
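A sketch of append(), continuing the header sketch above. It assumes the size field counts the bytes that follow the header, that the outermost set has enough slack after its current end, and, for brevity, it only bumps the sizes of inner and outer themselves rather than of every set in between.

    #include <string.h>

    /* Insert `item` (header + payload, item_len bytes in total) at the end
       of `inner`, which lives somewhere inside `outer`. */
    void append(uint8_t *outer, uint8_t *inner, const uint8_t *item, uint16_t item_len)
    {
        uint8_t *inner_end = inner + 2 + size_of(inner);   /* write point     */
        uint8_t *outer_end = outer + 2 + size_of(outer);   /* end of all data */

        /* Shift everything after the write point up by item_len, then copy. */
        memmove(inner_end + item_len, inner_end, (size_t)(outer_end - inner_end));
        memcpy(inner_end, item, item_len);

        hdr_write(inner, (hdr_read(inner) & HDR_TYPE_SET)
                         | (uint16_t)(size_of(inner) + item_len));
        if (inner != outer)
            hdr_write(outer, (hdr_read(outer) & HDR_TYPE_SET)
                             | (uint16_t)(size_of(outer) + item_len));
    }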

Coroutines

These are implemented as a "ret" with the previous "yield" label on the stack. When a yield is encountered, the next label is pushed onto the coroutine's stack. Then the stack of the yield-to destination is restored and a "ret" pops the resume location off the destination's stack. Prior to a yield, the local state is stored on the coroutine's stack. Coroutines are set up with the entry label pushed onto the stack (after the arguments are pushed).

A yield does:

  • Push local variables onto the stack of the yielding routine
  • Set the yield value in hl if applicable
  • Push the next instruction after the ret onto the stack of the yielding routine (probably a label)
  • Switch the stack pointer over to the destination routine
  • Do a "ret" over to the destination routine
  • The ret will pop the destination routine address from its stack
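The Z80 mechanism above has no direct C equivalent, but POSIX ucontext gives a higher-level analogue of the same shape: swapcontext() saves where the yielding routine should resume, switches to the other routine's stack and lands there, much as the stack-pointer swap plus "ret" does. This is only an analogue for illustration, not how OZ would implement it; the queue is reduced to a single yielded value standing in for hl.

    #include <stdio.h>
    #include <ucontext.h>

    static ucontext_t kernel_ctx, producer_ctx;
    static unsigned char producer_stack[16 * 1024];   /* the coroutine's own stack */
    static int yielded_value;                         /* stands in for hl          */

    static void yield_to_kernel(int value)
    {
        yielded_value = value;                        /* set the yield value        */
        swapcontext(&producer_ctx, &kernel_ctx);      /* save resume point, switch  */
    }                                                 /* stacks and "ret" across    */

    static void producer(void)
    {
        for (int i = 0; i < 3; i++)
            yield_to_kernel(i);                       /* produce one event per turn */
    }

    int main(void)
    {
        getcontext(&producer_ctx);
        producer_ctx.uc_stack.ss_sp   = producer_stack;
        producer_ctx.uc_stack.ss_size = sizeof producer_stack;
        producer_ctx.uc_link          = &kernel_ctx;  /* where to go when it ends   */
        makecontext(&producer_ctx, producer, 0);      /* push the entry "label"     */

        for (int i = 0; i < 3; i++) {
            swapcontext(&kernel_ctx, &producer_ctx);  /* kernel "calls"/resumes it  */
            printf("kernel received %d\n", yielded_value);
        }
        return 0;
    }

The main() side corresponds to the calling convention described next: the caller saves its own position, switches to the callee's stack and resumes it, which on the Z80 is an explicit stack pointer swap and "ret" rather than swapcontext().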

Yielding routines are called a bit differently. First the calling routine saves its local variables on its own stack, then pushes the local return label onto its own stack. It then switches the stack pointer over to the callee's stack, pushes any routine arguments, pushes the routine's entry label and does a "ret".

In the case of OZ, the routine calling the yielding routine is always the kernel, which means the stack switching can be done by a kernel routine/syscall. The yielding routine is always a queue producer.

How could coroutines work in a more general sense? Something has to be aware of the location of routine stacks.