Threos is designed for embedded systems, so it requires only a small amount of memory and other resources. Threos is built on a microkernel: only the low-level, critical services are implemented in kernel space. These services are: scheduling, tasking, interrupt handling, memory management (if available), and some synchronization objects.
- Task: an execution environment and owner of various resources, such as threads and memory objects; isolated from other tasks (the concept is similar to a process on Linux or Windows)
- Thread: a schedulable entity; contains a context and may have exception contexts (very similar to a thread on Windows)
- Memo: a memory object (a raw memory allocation, rarely useful without views)
- View: a mapped memo; memory is accessible via views
A task is a collection of various kernel objects: almost all kernel objects are owned by a task. A task owns threads, which can be scheduled and execute the task's job. Every thread needs memory to be able to run: memory for the code and for the stack.
A thread is a kernel object that has a context (and may additionally have exception nodes or contexts), and can be scheduled. During its execution, a thread can create new kernel objects, delete kernel objects (if the required permission is granted), and interact with synchronization objects (signaling and waiting). A thread can cause exceptions just by reading memory (page faults), and it may also cause fatal (non-recoverable) exceptions that lead to the thread (or the task, depending on the policy) being killed or zombified.
A thread needs memory to actually run: most important is the code that the thread executes, and second most important is the stack where the thread can store data. Two major cases are distinguished here:
- Systems with memory management: Threads can execute only mapped code, so the code must be loaded into memory somehow. Common scenarios: the entire executable is loaded into memory, then the thread is created; or an empty shell is created and bound to the code to be loaded, the thread is let run, and the loader fetches what is needed.
- Systems without memory management: Since no virtual memory exists, and it is impossible to map (regular) memory to different locations, threads can execute basically any memory. Furthermore, if such a system supports executing from ROM, threads usually execute the code directly from ROM instead of loading it. This requires special linking, but it can be much faster, and the code is protected from alteration (intentional as well as unintentional).
A group (or task group) is a set of tasks. A group has a session ID (specified during creation), and may have a group leader (or session leader). Depending on the properties of a group, it can destroy itself under certain conditions (for instance, if the leader dies, it can kill the whole group). Permissions can be shared within the group. Tasks must be created into a group.
Related system calls: CreateGroup.
The memory layout was designed to be easily portable between systems with and without memory management. Thus, the operating system uses one global address space instead of one address space per task. Porting to a system with one global address space is simpler than porting to a system with many address spaces. In fact, on systems without memory management, the view is a very thin "layer" in the kernel, but it still exists to keep the code source-compatible with systems that have memory management.
The memory object reserves memory, views can be made on it, and the reserved memory is allocated via page faults (lazy memory allocation). A view can map any page-aligned portion of the reserved memory: one view can map the whole memory object, another the first n pages, another some pages in the middle, etc. A view cannot lie outside the reserved memory (this is calculated from the view's page offset and size, and checked against the number of reserved pages of the memory object).
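The bounds check described above can be sketched as follows. This is an illustrative model, not the actual Threos kernel code; the function and parameter names are assumptions, only the rule (offset + size checked against the number of reserved pages) comes from the text.

```c
#include <stdbool.h>
#include <stdint.h>

/* Illustrative sketch of the kernel's view bounds check: a view,
 * given by its page offset and size in pages, must lie entirely
 * inside the reserved pages of its memory object. */
static bool view_in_bounds(uint32_t page_offset, uint32_t view_pages,
                           uint32_t reserved_pages)
{
    /* An empty view or one larger than the whole reservation is invalid. */
    if (view_pages == 0 || view_pages > reserved_pages)
        return false;
    /* The subtraction cannot underflow here, so this comparison is
     * equivalent to page_offset + view_pages <= reserved_pages
     * without risking integer overflow. */
    return page_offset <= reserved_pages - view_pages;
}
```

The overflow-safe comparison matters because both values are untrusted user input in a real kernel.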
Shared views are regular views, but their virtual address can be reused in other tasks. A critical requirement is that shared views must be made from the same memory object with the same page offset. Therefore, the same memory region can be addressed from different tasks, and the virtual addresses are shared across the tasks. This gives a convenient way to share large amounts of data between tasks.
Systems without memory management have the same layers, as mentioned earlier, but on those systems the views are quite a transparent layer. Memory objects here do not just reserve memory but actually allocate it, since these systems lack page fault exceptions (at least for this purpose).
Important: the physical and the virtual memory do not necessarily have the same size; in fact, the virtual address space is usually much larger than the physical memory.
See an example memory layout here.
The kernel supports the Semaphore synchronization object. A semaphore is basically a counter, but it is fully thread-safe and threads can wait on it. Threads can increase the counter without blocking (however, the counter cannot exceed the maximal value that was set during creation). Threads can decrease the counter with or without blocking (depending on the current state of the semaphore and on the parameters of the system call). The counter can be zero, but cannot be negative.
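The counter rules can be modeled in a few lines of C. This is a single-threaded sketch of the semantics only (capped increase, non-blocking decrease, never negative); the real semaphore is a kernel object and handles blocking and thread safety itself.

```c
#include <stdbool.h>
#include <stdint.h>

/* Illustrative model of the semaphore counter; not the Threos API. */
typedef struct {
    uint32_t count;
    uint32_t max;   /* set at creation; the counter cannot exceed it */
} sema_model;

/* Non-blocking increase: fails if the counter is already at max. */
static bool sema_signal(sema_model *s)
{
    if (s->count >= s->max)
        return false;
    s->count++;
    return true;
}

/* Non-blocking decrease (the "without blocking" case): fails instead
 * of blocking when the counter is zero, so it never goes negative. */
static bool sema_try_wait(sema_model *s)
{
    if (s->count == 0)
        return false;
    s->count--;
    return true;
}
```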
The de facto standard way to communicate with other tasks/threads is the Message Port. Multiple Message Ports can be created for any task, and one Message Port can hold a few messages. Some kernel objects (threads, Message Pipes, Signal Messages, etc.) are able to wait until a slot is freed up. One message consists of three register-sized integers (the C API uses uintptr_t for this): sender, message1, message2. Note that the sender is of type Handle (which is uintptr_t), but this does not mean it is always a valid handle; in general, however, the sender field is meant to be a sender handle (another Message Port).
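A message with this layout could be declared in C as below. The field names and the Handle/uintptr_t types come from the text; the struct name `Message` and the declaration itself are illustrative, not the kernel's actual definition.

```c
#include <stdint.h>

typedef uintptr_t Handle;   /* as stated above: Handle is uintptr_t */

/* Illustrative declaration of one Message Port message: exactly
 * three register-sized integers. */
typedef struct {
    Handle    sender;    /* usually another Message Port; not always a valid handle */
    uintptr_t message1;
    uintptr_t message2;
} Message;
```

Since all three fields are register-sized, the struct needs no padding and occupies three machine words.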
The Signal Message is a special message that can be sent to a Message Port, but with fixed parameters. This message can be sent multiple times, and the message parameters (and the destination Message Port) will always be the same. A signal message can be sent at any time; the only requirement is that the destination Message Port must exist. This makes it useful for sending messages from ISRs.
Interrupt Service Routine
Tasks can handle interrupts using an Interrupt Service Routine (abbreviated ISR), which is executed in the address space of the task. A few restrictions apply to ISRs:
- Must return with the RetISR system call.
- Shall return as fast as possible. Heavy calculations shall be done outside the interrupt.
- Can interact with synchronization objects only without a timeout; this means SendMsg can be called only with MP_NOW as the timeout. The same applies to WaitSemaphore, but with SEMA_NOW.
- Cannot cause page faults that require loader interaction. To avoid this situation, it is recommended to preload the code and the stack by touching them (reading one byte from each page). Note that the stack does not have to be touched, but touching it will increase performance.
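The recommended page-touching can be sketched as below. The helper name and the PAGE_SIZE constant are assumptions (the real page size depends on the target); only the technique of reading one byte per page to force pages in before the ISR runs comes from the text.

```c
#include <stddef.h>
#include <stdint.h>

#define PAGE_SIZE 4096u   /* assumption; query the real page size on the target */

/* Touch one byte of every page in [base, base + len) so that any
 * loader-related page faults happen now, not later inside an ISR.
 * The volatile reads prevent the compiler from dropping the accesses.
 * The XOR of the touched bytes is returned only so the result is
 * observable; the value itself is not meaningful for preloading. */
static uint8_t preload(const volatile uint8_t *base, size_t len)
{
    uint8_t sink = 0;
    for (size_t off = 0; off < len; off += PAGE_SIZE)
        sink ^= base[off];
    if (len != 0)
        sink ^= base[len - 1];  /* ensure the final page is also touched */
    return sink;
}
```

In practice a task would call this on its ISR code and stack regions right before registering the ISR.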
Exceptions may arise during code execution, and these (EXHAN_FLAG_SYSTEM type) exceptions should be correctly handled. These are quite low-level exceptions, and they vary by system. For example, accessing data that requires alignment at an unaligned address may cause (depending on the machine) an EXID_SYS_ALIGNMENT exception.
Tasks may receive user (EXHAN_FLAG_USER type) exceptions as well. The exception ID in user exceptions is not defined by the kernel. (Note: however, it is defined by the libc installed on the operating system.)
The page faults mentioned earlier also materialize as exceptions. However, the exception handler can differ from memory object to memory object. The exception handler is executed in the owner task of the memory object. This means the handler code must be executable in the owner task (even though the exception handler is not necessarily set in the owner task).
Related system calls: SetExceptionHandler.
Author: Aron Barath, 2017