Most places use CONFIG_X86_STACK_PROTECTION, but there are some
places using CONFIG_HW_STACK_PROTECTION. So synchronize all
to use CONFIG_X86_STACK_PROTECTION instead.
Signed-off-by: Daniel Leung <daniel.leung@intel.com>
This moves the k_* memory management functions from sys/ into
kernel/ includes, as these are kernel public APIs. The z_*
functions are further separated into the kernel internal
header directory.
Also made a quick change to Doxygen to group sys_mem_* into
the OS Memory Management group so they will appear in the
documentation.
Signed-off-by: Daniel Leung <daniel.leung@intel.com>
In order to bring consistency in-tree, migrate all arch code to the new
prefix <zephyr/...>. Note that the conversion has been scripted, refer
to zephyrproject-rtos#45388 for more details.
Signed-off-by: Gerard Marull-Paretas <gerard.marull@nordicsemi.no>
In order to determine at runtime whether it booted via Multiboot or
EFI, let's introduce a dedicated x86 CPU argument structure which
holds the type and the actual pointer delivered by the method
(multiboot_info or efi_system_table).
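A minimal sketch of the shape such a structure could take (type and
field names here are illustrative, not the exact in-tree definitions):

  /* Illustrative sketch only; in-tree names may differ. */
  enum boot_method {
          BOOT_METHOD_MULTIBOOT,
          BOOT_METHOD_EFI,
  };

  struct x86_cpu_arg {
          enum boot_method type; /* how the kernel was booted */
          void *arg;             /* multiboot_info or efi_system_table */
  };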
Signed-off-by: Tomasz Bursztyka <tomasz.bursztyka@linux.intel.com>
There was a restriction that KERNEL_VM_OFFSET must be equal to
SRAM_OFFSET so that the page directory pointer (PDP) or page
directory (PD) can be reused. This is not very practical in
real world due to various hardware designs, especially those
where SRAM is not aligned to PDP or PD. So rework those bits.
Signed-off-by: Daniel Leung <daniel.leung@intel.com>
This reverts commit 7d32e9f9a5.
We now allow the kernel to be linked virtually. This patch:
- Properly converts between virtual/physical addresses
- Handles early boot instruction pointer transition
- Double-maps SRAM to both virtual and physical locations
in boot page tables to facilitate the instruction pointer
transition, with logic to clean this up once the transition
is complete.
Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
Signed-off-by: Daniel Leung <daniel.leung@intel.com>
With the introduction of Z_MEM_*_ADDR for physical<->virtual
address translation, there is no need to have x86-specific
versions.
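As a rough illustration, offset-based translation of this kind boils
down to something like the following (macro names are placeholders,
assuming a constant offset between the physical RAM base and the
kernel's virtual base):

  /* Placeholder names; not the exact in-tree definitions. */
  #define VIRT_PHYS_OFFSET \
          ((CONFIG_KERNEL_VM_BASE + CONFIG_KERNEL_VM_OFFSET) - \
           (CONFIG_SRAM_BASE_ADDRESS + CONFIG_SRAM_OFFSET))

  #define MEM_VIRT_ADDR(phys) ((uintptr_t)(phys) + VIRT_PHYS_OFFSET)
  #define MEM_PHYS_ADDR(virt) ((uintptr_t)(virt) - VIRT_PHYS_OFFSET)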
Signed-off-by: Daniel Leung <daniel.leung@intel.com>
This adds a new kconfig CONFIG_SRAM_OFFSET to specify the offset
from beginning of SRAM where the kernel begins. On x86 and
PC compatible platforms, the first 1MB of RAM is reserved and
Zephyr should not link anything there. However, this 1MB still
needs to be mapped by the MMU to access various platform-related
information. CONFIG_SRAM_OFFSET serves a similar function to
CONFIG_KERNEL_VM_OFFSET and is needed for proper phys/virt
address translations.
Signed-off-by: Daniel Leung <daniel.leung@intel.com>
This adds the X86 keyword to the kconfigs to indicate these are
for x86. The old options are still there, marked as
deprecated.
Signed-off-by: Daniel Leung <daniel.leung@intel.com>
All arch_ APIs and macros are implemented, and the page fault
handling code will call into the core kernel.
Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
This is only needed if the base address of SRAM doesn't
have the same alignment as the base address of the virtual
address space.
Fix the calculations on X86 where this is the case.
Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
The page table implementation requires conversion between virtual
and physical addresses when creating and walking page tables. Add
phys_addr() and virt_addr() functions instead of hard-casting
these values, plus a macro for doing the same in ASM code.
Currently, all pages are identity mapped, so VIRT_OFFSET is 0, but
this will continue to work when the virtual and physical addresses
differ.
The 32-bit assembly code was also updated. Comments were left in
the 64-bit code, as long mode semantics don't allow use of the
Z_X86_PHYS_ADDR macro; this can be revisited later.
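A simplified sketch of what these helpers amount to (assuming the
virtual-to-physical offset is a single constant; with identity
mapping it is zero and both helpers are no-ops):

  /* Simplified sketch, not the in-tree code. */
  #define VIRT_OFFSET (CONFIG_KERNEL_VM_BASE - CONFIG_SRAM_BASE_ADDRESS)

  static inline uintptr_t phys_addr(void *virt)
  {
          return (uintptr_t)virt - VIRT_OFFSET;
  }

  static inline void *virt_addr(uintptr_t phys)
  {
          return (void *)(phys + VIRT_OFFSET);
  }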
Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
This was reporting the wrong page tables for supervisor
threads with KPTI enabled.
Analysis of existing use of this API revealed no problems
caused by this issue, but someone may trip over it eventually.
Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
This will be used by the MSI multi-vector implementation to connect
the IRQ
and the vector prior to allocation.
Signed-off-by: Tomasz Bursztyka <tomasz.bursztyka@linux.intel.com>
We provide an option for low-memory systems to use a single set
of page tables for all threads. This is only supported if
KPTI and SMP are disabled. This configuration saves a considerable
amount of RAM, especially if multiple memory domains are used,
at a cost of context switching overhead.
Some caching techniques are used to reduce the amount of context
switch updates; the page tables aren't updated if switching to
a supervisor thread, and the page table configuration of the last
user thread switched in is cached.
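Roughly, the caching amounts to the following sketch (helper names
are hypothetical):

  /* Sketch only. Skip the page table switch for supervisor threads,
   * and for user threads whose tables are already the ones installed
   * by the last user thread switched in.
   */
  static uintptr_t last_user_ptables;

  static void maybe_swap_page_tables(struct k_thread *incoming)
  {
          if ((incoming->base.user_options & K_USER) == 0U) {
                  return; /* supervisor threads keep the kernel tables */
          }

          uintptr_t ptables = thread_ptables_get(incoming); /* hypothetical */

          if (ptables != last_user_ptables) {
                  last_user_ptables = ptables;
                  install_ptables(ptables); /* hypothetical: load CR3 */
          }
  }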
Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
- z_x86_userspace_enter() for both 32-bit and 64-bit now
calls into C code to clear the stack buffer and set the
US bits in the page tables for the memory range.
- Page tables are now associated with memory domains,
instead of having separate page tables per thread.
A spinlock protects write access to these page tables,
and read/write access to the list of active page
tables.
- arch_mem_domain_init() implemented, allocating and
copying page tables from the boot page tables.
- struct arch_mem_domain defined for x86. It has
a page table link and also a list node for iterating
over them.
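The structure described in the last bullet could look roughly like
this (a sketch, not the exact in-tree layout):

  struct arch_mem_domain {
          pentry_t *ptables; /* this domain's top-level page table */
          sys_snode_t node;  /* list node for iterating active domains */
  };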
Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
This will be needed when we support memory un-mapping, or
the same user mode page tables on multiple CPUs. Neither
is implemented yet.
Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
Adds the necessary bits to initialize TLS in the stack
area and sets up CPU registers during context switch.
Signed-off-by: Daniel Leung <daniel.leung@intel.com>
This function iterates over the thread's memory domain
and updates page tables based on it. We need to be holding
z_mem_domain_lock while this happens.
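In outline, and with a hypothetical helper name, it does something
like:

  /* Sketch only. Caller holds z_mem_domain_lock so the partition
   * list cannot change while the page tables are being updated.
   */
  static void apply_mem_domain(struct k_thread *thread)
  {
          struct k_mem_domain *domain = thread->mem_domain_info.mem_domain;

          for (int i = 0; i < CONFIG_MAX_DOMAIN_PARTITIONS; i++) {
                  struct k_mem_partition *ptn = &domain->partitions[i];

                  if (ptn->size == 0U) {
                          continue; /* unused slot */
                  }
                  set_region_perms(thread, ptn); /* hypothetical helper */
          }
  }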
Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
The boot code of x86_64 initializes the stack (if enabled)
with a hard-coded size for the ISR stack. However,
the stack being used does not have to be the ISR stack,
and can be any of the defined stacks. So pass in the actual
size of the stack so it can be initialized properly.
Fixes #21843
Signed-off-by: Daniel Leung <daniel.leung@intel.com>
We no longer plan to support a split address space with
the kernel in high memory and per-process address spaces.
Because of this, we can simplify some things. System RAM
is now always identity mapped at boot.
We no longer require any virtual-to-physical translation
for page tables, and can remove the dual-mapping logic
from the page table generation script since we won't need
to transition the instruction pointer off of physical
addresses.
CONFIG_KERNEL_VM_BASE and CONFIG_KERNEL_VM_LIMIT
have been removed. The kernel's address space always
starts at CONFIG_SRAM_BASE_ADDRESS, with a fixed size
specified by CONFIG_KERNEL_VM_SIZE.
Driver MMIOs and other uses of k_mem_map() are still
virtually mapped, and the later introduction of demand
paging will result in only a subset of system RAM being
a fixed identity mapping instead of all of it.
Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
In order to make it possible to debug them, user mode threads need
to be able to issue breakpoint and debug exceptions. To do this it
is necessary to set the DPL bits to, at least, the same CPL level.
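A sketch of the idea (set_idt_gate() and the handler names are
hypothetical):

  /* A gate reached via a software INT instruction (e.g. INT3) can
   * only be invoked from ring 3 if the gate descriptor's DPL is >=
   * the caller's CPL, so the debug and breakpoint vectors are
   * installed with DPL 3 instead of 0.
   */
  static void install_debug_gates(void)
  {
          set_idt_gate(1, handle_debug, 3);      /* #DB, DPL = user */
          set_idt_gate(3, handle_breakpoint, 3); /* #BP, DPL = user */
  }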
Signed-off-by: Flavio Ceolin <flavio.ceolin@intel.com>
The x86 paging code has been rewritten to support another paging mode
and non-identity virtual mappings.
- Paging code now uses an array of paging level characteristics and
walks tables using for loops (see the sketch after this list). This
is opposed to having different functions for every paging level and
lots of #ifdefs. The code is now more concise and adding new paging
modes should be trivial.
- We now support 32-bit, PAE, and IA-32e page tables.
- The page tables created by gen_mmu.py are now installed at early
boot. There are no longer separate "flat" page tables. These tables
are mutable at any time.
- The x86_mmu code now has a private header. Many definitions that did
not need to be in public scope have been moved out of mmustructs.h
and either placed in the C file or in the private header.
- Improvements to dumping page table information, with the physical
mapping and flags all shown
- arch_mem_map() implemented
- x86 userspace/memory domain code ported to use the new
infrastructure.
- add logic for physical -> virtual instruction pointer transition,
including cleaning up identity mappings after this takes place.
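The per-level table and walk loop mentioned in the first bullet
could look roughly like this (values shown for IA-32e 4-level
paging; field names are illustrative):

  struct paging_level {
          unsigned int shift; /* bit position of this level's index */
          size_t entries;     /* number of entries per table */
  };

  static const struct paging_level paging_levels[] = {
          { .shift = 39, .entries = 512 }, /* PML4 */
          { .shift = 30, .entries = 512 }, /* PDPT */
          { .shift = 21, .entries = 512 }, /* PD   */
          { .shift = 12, .entries = 512 }, /* PT   */
  };

  /* One loop over the levels replaces a function per level. */
  static void walk(uintptr_t virt)
  {
          for (size_t lvl = 0; lvl < 4; lvl++) {
                  size_t idx = (virt >> paging_levels[lvl].shift) &
                               (paging_levels[lvl].entries - 1);

                  /* look up entry idx in this level's table, then
                   * descend into the table it points to
                   */
                  (void)idx;
          }
  }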
Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
These stacks are appropriate for threads that run purely in
supervisor mode, and also as stacks for interrupt and exception
handling.
Two new arch defines are introduced:
- ARCH_KERNEL_STACK_GUARD_SIZE
- ARCH_KERNEL_STACK_OBJ_ALIGN
New public declaration macros:
- K_KERNEL_STACK_RESERVED
- K_KERNEL_STACK_EXTERN
- K_KERNEL_STACK_DEFINE
- K_KERNEL_STACK_ARRAY_DEFINE
- K_KERNEL_STACK_MEMBER
- K_KERNEL_STACK_SIZEOF
If user mode is not enabled, K_KERNEL_STACK_* and K_THREAD_STACK_*
are equivalent.
Separately generated privilege elevation stacks are now declared
like kernel stacks, removing the need for K_PRIVILEGE_STACK_ALIGN.
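A typical usage sketch (worker_entry and the numbers are
placeholders):

  #include <zephyr/kernel.h>

  static void worker_entry(void *p1, void *p2, void *p3); /* placeholder */

  /* Stack for a thread that never enters user mode. */
  K_KERNEL_STACK_DEFINE(worker_stack, 1024);
  static struct k_thread worker_thread;

  void start_worker(void)
  {
          k_thread_create(&worker_thread, worker_stack,
                          K_KERNEL_STACK_SIZEOF(worker_stack),
                          worker_entry, NULL, NULL, NULL,
                          K_PRIO_PREEMPT(5), 0, K_NO_WAIT);
  }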
Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
This commit renames the x86 Kconfig `CONFIG_{EAGER,LAZY}_FP_SHARING`
symbol to `CONFIG_{EAGER,LAZY}_FPU_SHARING`, in order to align with the
recent `CONFIG_FP_SHARING` to `CONFIG_FPU_SHARING` renaming.
Signed-off-by: Stephanos Ioannidis <root@stephanos.io>
This operation is formally defined as rounding down a potential
stack pointer value to meet CPU and ABI requirements.
This was previously defined ad-hoc as STACK_ROUND_DOWN().
A new architecture constant ARCH_STACK_PTR_ALIGN is added.
Z_STACK_PTR_ALIGN() is defined in terms of it. This used to
be inconsistently specified as STACK_ALIGN or STACK_PTR_ALIGN;
in the latter case, STACK_ALIGN meant something else, typically
a required alignment for the base of a stack buffer.
STACK_ROUND_UP() is only used in practice by RISC-V; delete it
elsewhere.
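In essence (the value and exact spelling below are a sketch, not the
in-tree definitions):

  /* Example: an ABI requiring 16-byte stack pointer alignment. */
  #define ARCH_STACK_PTR_ALIGN 16

  /* Round a candidate stack pointer value down to that alignment. */
  #define Z_STACK_PTR_ALIGN(ptr) \
          ((uintptr_t)(ptr) & ~((uintptr_t)ARCH_STACK_PTR_ALIGN - 1))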
Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
The set of interrupt stacks is now expressed as an array. We
also define the idle threads and their associated stacks this
way. This allows for iteration in cases where we have multiple
CPUs.
There is now a centralized declaration in kernel_internal.h.
On uniprocessor systems, z_interrupt_stacks has one element
and can be used in the same way as _interrupt_stack.
The IRQ stack for CPU 0 is now set in init.c instead of in
arch code.
The extern declaration of the main thread stack is now removed;
this doesn't need to be in a header.
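The centralized declaration could look roughly like this, reusing
the kernel stack macros (exact attributes and location are a
sketch):

  /* Sketch: the array definition; the header carries a matching
   * extern declaration. On a uniprocessor build CONFIG_MP_NUM_CPUS
   * is 1 and z_interrupt_stacks[0] plays the old _interrupt_stack
   * role.
   */
  K_KERNEL_STACK_ARRAY_DEFINE(z_interrupt_stacks, CONFIG_MP_NUM_CPUS,
                              CONFIG_ISR_STACK_SIZE);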
Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
Use of the _current_cpu pointer cannot be done safely in a preemptible
context. If a thread is preempted and migrates to another CPU, the
old CPU record will be wrong.
Add a validation assert to the expression that catches incorrect
usages, and fix up the spots where it was wrong (most important being
a few uses of _current outside of locks, and the arch_is_in_isr()
implementation).
Note that the resulting _current expression now requires locking and
is going to be somewhat slower. Longer term it's going to be better
to augment the arch API to allow SMP architectures to implement a
faster "get current thread pointer" action than this default.
Note also that this change means that "_current" is no longer
expressible as an lvalue (long ago, it was just a static variable), so
the places where it gets assigned now assign to _current_cpu->current
instead.
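The guard amounts to something like this sketch, where
cpu_is_pinned_here() is a hypothetical predicate standing in for
whatever check the kernel actually uses:

  /* True when the caller cannot migrate: in an ISR, or running with
   * interrupts locked (hypothetical predicate).
   */
  bool cpu_is_pinned_here(void);

  #define checked_current_cpu() \
          ({ \
                  __ASSERT(cpu_is_pinned_here(), \
                           "_current_cpu used from preemptible context"); \
                  arch_curr_cpu(); \
          })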
Signed-off-by: Andy Ross <andrew.j.ross@intel.com>
Implement a set of per-CPU trampoline stacks which all
interrupts and exceptions will initially land on. These also
serve as intermediate stacks for privilege changes, as we need
some stack space to swap page tables.
Set up the special trampoline page which contains all the
trampoline stacks, TSS, and GDT. This page needs to be
present in the user page tables or interrupts don't work.
CPU exceptions, with KPTI turned on, are treated as interrupts
and not traps so that we have IRQs locked on exception entry.
Add some additional macros for defining IDT entries.
Add special handling of locore text/rodata sections when
creating user mode page tables on x86-64.
Restore qemu_x86_64 to use KPTI, and remove restrictions on
enabling user mode on x86-64.
Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
- In early boot, enable the syscall instruction and set up
necessary MSRs
- Add a hook to update page tables on context switch
- Properly initialize thread based on whether it will
start in user or supervisor mode
- Add landing function for system calls to execute the
desired handler
- Implement arch_user_string_nlen()
- Implement logic for dropping a thread down to user mode
- Reserve per-CPU storage space for user and privilege
elevation stack pointers, necessary for handling syscalls
when no free registers are available
- Proper handling of gs register considerations when
transitioning privilege levels
Kernel page table isolation (KPTI) is not yet implemented.
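The early-boot MSR setup amounts to roughly the following sketch
(MSR numbers are from the SDM; syscall_entry, star_selectors and the
MSR write helpers are placeholders):

  #define IA32_EFER_MSR  0xC0000080 /* EFER.SCE enables syscall/sysret */
  #define IA32_STAR_MSR  0xC0000081 /* kernel/user segment selectors */
  #define IA32_LSTAR_MSR 0xC0000082 /* 64-bit syscall entry point */
  #define IA32_FMASK_MSR 0xC0000084 /* RFLAGS bits cleared on entry */

  static void syscall_setup(void)
  {
          msr_write(IA32_EFER_MSR, msr_read(IA32_EFER_MSR) | 1ULL);
          msr_write(IA32_LSTAR_MSR, (uintptr_t)syscall_entry);
          msr_write(IA32_STAR_MSR, star_selectors);
          msr_write(IA32_FMASK_MSR, 0x200); /* clear IF on entry */
  }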
Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
We use a fixed value of 32, as the way interrupts/exceptions
are set up in x86_64's locore.S does not lend itself to
Kconfig configuration of the vector to use.
HW-based kernel oops is now permanently on; there's no reason
I can see to make it optional.
Default vectors for IPI and irq offload adjusted to not
collide.
Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
These are now common code; all are related to user mode
threads. The rat's nest of ifdefs in ia32's arch_new_thread
has been greatly simplified; there is now just one hook
if user mode is turned on.
Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
z_x86_thread_page_tables_get() now works for both user
and supervisor threads, returning the kernel page tables
in the latter case. This API has been up-leveled to
a common header.
The initial stack pointer of the per-thread privilege elevation
stack, and the per-thread page table locations, are no longer
computed from other values; instead they are stored
in thread->arch.
A problem where the wrong page tables were dumped out
on certain kinds of page faults has been fixed.
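In outline (a sketch of the behavior, not the in-tree code; the
thread->arch field name is illustrative):

  pentry_t *z_x86_thread_page_tables_get(struct k_thread *thread)
  {
          if ((thread->base.user_options & K_USER) != 0U) {
                  return thread->arch.ptables; /* this thread's tables */
          }
          return z_x86_kernel_ptables; /* supervisor: kernel tables */
  }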
Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>