Reboot functionality has nothing to do with PM, so move it out to the
subsys/os folder.
Signed-off-by: Gerard Marull-Paretas <gerard.marull@nordicsemi.no>
Both operands of an operator in which the usual arithmetic
conversions are performed shall have the same essential
type category.
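An illustrative example of the rule (hypothetical code, not from the patch):

  #include <stdint.h>

  uint32_t sum(uint32_t u, int32_t s)
  {
      /* Non-compliant: the operands mix the unsigned and signed
       * essential type categories.
       *
       *     return u + s;
       */

      /* Compliant: both operands are essentially unsigned. */
      return u + (uint32_t)s;
  }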
Signed-off-by: Flavio Ceolin <flavio.ceolin@intel.com>
The variable tsc_freq is not accessible in user threads,
which prevents user threads from converting cycles to ns.
So make tsc_freq available globally in the default memory
domain so the conversion is possible.
Signed-off-by: Daniel Leung <daniel.leung@intel.com>
Tests of a value against zero should be made explicit, unless the
operand is effectively Boolean. This is based on MISRA rule 14.4.
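A minimal illustration (hypothetical code, not from the patch):

  #include <stdbool.h>
  #include <stdint.h>

  bool any_set(uint32_t flags)
  {
      /* Implicit test of a non-Boolean, flagged by the rule:
       *
       *     if (flags) { ... }
       */

      /* Explicit comparison against zero, as the rule requires: */
      return (flags != 0U);
  }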
Signed-off-by: Anas Nashif <anas.nashif@intel.com>
This changes the assertion triggered when a large page is
encountered so that the page directory entry is instead copied to
the new page directory. This is needed when a large page entry is
generated by gen_mmu.py. Note that this still asserts when there
are large page entries at a higher level.
Signed-off-by: Daniel Leung <daniel.leung@intel.com>
This makes the gen_mmu.py script error out if the reserved space
for the page table in zephyr_prebuilt.elf is not large enough to
accommodate the generated page table. Let's catch this at build time
instead of via mysterious hangs when loading the page table at boot.
Signed-off-by: Daniel Leung <daniel.leung@intel.com>
The whole page table is pre-allocated at build time and is
dependent on the range of address space. This kconfig allows
reserving extra pages (of size CONFIG_MMU_PAGE_SIZE) for
the page table so that gen_mmu.py can make use of these
extra pages.
Signed-off-by: Daniel Leung <daniel.leung@intel.com>
This patch replaces ENOSYS with ENOTSUP to keep consistency with
the return value specification of k_float_enable().
Signed-off-by: Katsuhiro Suzuki <katsuhiro@katsuster.net>
This patch introduces a new API to enable the FPU for a thread,
the counterpart of the existing k_float_disable() API. It also adds
an empty arch_float_enable() to each architecture that has
arch_float_disable(). The arc and riscv architectures already
implement arch_float_enable(), so those implementations are
untouched.
Motivation: the current Zephyr implementation does not allow using
the FPU on the main thread and other system threads such as the
work queue. Users need to create another thread with K_FP_REGS for
floating point programs. Users can use the FPU more easily if they
can enable it on running threads.
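A minimal usage sketch (hedged; assumes K_FP_REGS is a valid option
for the target arch, with the error value per the ENOTSUP patch above):

  #include <kernel.h>

  void enable_fpu_on_current_thread(void)
  {
      /* Enable FPU register sharing on the already-running thread,
       * instead of creating a new thread with K_FP_REGS.
       */
      int ret = k_float_enable(k_current_get(), K_FP_REGS);

      if (ret != 0) {
          /* e.g. -ENOTSUP if the arch cannot enable the FPU at runtime */
      }
  }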
Signed-off-by: Katsuhiro Suzuki <katsuhiro@katsuster.net>
The only user of arch_mem_domain_destroy was the deprecated
k_mem_domain_destroy function which has now been removed. So remove
arch_mem_domain_destroy as well.
Signed-off-by: Kumar Gala <kumar.gala@linaro.org>
This unifies the display of region sizes in hex. Some of them
are there to aid in figuring out the end of a memory region,
so it is easier if they are already in hex.
This also fixes the display of address ranges where the end
is off by one; it should be (base + size - 1).
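For instance, a hedged sketch of the corrected range display
(illustrative, not the actual diff):

  #include <sys/printk.h>

  static void dump_region(unsigned long base, unsigned long size)
  {
      /* The end address is inclusive, hence base + size - 1. */
      printk("0x%08lx - 0x%08lx (size 0x%lx)\n",
             base, base + size - 1UL, size);
  }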
Signed-off-by: Daniel Leung <daniel.leung@intel.com>
After the page table is loaded, we should be executing in the
virtual address space. Therefore we need to set ESP to the virtual
address of the interrupt stack for the boot process.
Signed-off-by: Daniel Leung <daniel.leung@intel.com>
This reverts commit d40e8ede8e.
This fixes triple faults after wiping the identity mapping of
physical memory when entering userspace.
Signed-off-by: Daniel Leung <daniel.leung@intel.com>
This reverts commit 7d32e9f9a5.
We now allow the kernel to be linked virtually. This patch:
- Properly converts between virtual/physical addresses
- Handles early boot instruction pointer transition
- Double-maps SRAM to both virtual and physical locations
in boot page tables to facilitate instruction pointer
transition, with logic to clean this up once completed.
Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
Signed-off-by: Daniel Leung <daniel.leung@intel.com>
This reuses the page directory pointer table (PAE=y) or page
directory (PAE=n) to point to next level page directory table
(PAE=y) or page tables (PAE=n) to identity map the physical
memory. This gets rid of the extra memory needed to host
the extra mappings which are only used at boot. Following
patches will have code to actually unmap physical memory
during the boot process, so this avoids wasting memory.
Since no extra memory needs to be reserved, this also reverts
commit ee3d345c09
("x86: mmu: reserve more space for page table if linking in virt").
Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
Signed-off-by: Daniel Leung <daniel.leung@intel.com>
This allows specifying a second --verbose on the command line to
enable more messages. Two new ones have been added to aid
in debugging the code for mapping and setting permissions on
a single page.
Signed-off-by: Daniel Leung <daniel.leung@intel.com>
There is no need to use this kconfig, as the phys-to-virt
offset is enough to figure out if the kernel is linked in
virtual address space in gen_mmu.py.
For code, use Z_VM_KERNEL instead.
Signed-off-by: Daniel Leung <daniel.leung@intel.com>
With the introduction of Z_MEM_*_ADDR for physical<->virtual
address translation, there is no need to have x86 specific
versions.
Signed-off-by: Daniel Leung <daniel.leung@intel.com>
Since the removal of Quark-based boards, there are no users of
Minute-IA. Also, the generic x86 SoC is not exactly Minute-IA,
so change it to use the fairly safe CPU_ATOM.
Fixes #14442
Signed-off-by: Daniel Leung <daniel.leung@intel.com>
For some unknown reason, the page table address in _df_tss.cr3
did not get translated from virtual to physical. However,
the translation is done if the pointer to the page table is obtained
through a reference to the first array element (instead of simply
through the name of the array). Without CR3 pointing to the page
table via a physical address, double fault handling does not work.
So fix this by being explicit with the page table pointer.
Signed-off-by: Daniel Leung <daniel.leung@intel.com>
When adding a new thread to memory domain, there is a NULL check
to figure out if a thread is being migrated to another memory
domain. However, the NULL check is AFTER physical-to-virtual
address translation which means (NULL + offset) != NULL anymore.
This results in calling reset_region() with an invalid page table
pointer. Fix this by doing the NULL check before address
translation.
Signed-off-by: Daniel Leung <daniel.leung@intel.com>
When linking in virtual address space, we still need physical
addresses in SRAM to be mapped so the platform can boot from physical
memory and to access structures necessary for boot (e.g. GDT and
IDT). So we need to enlarge the reserved space for page table
to accommodate this.
Signed-off-by: Daniel Leung <daniel.leung@intel.com>
We have been having the assumption that the physical memory
is identity-mapped to virtual address space. However, with
the ability to set CONFIG_KERNEL_VM_BASE separately from
CONFIG_SRAM_BASE_ADDRESS, the assumption is no longer valid.
This changes the boot code in x86 32-bit, so that once
the page table is loaded, we can proceed with executing in
the virtual address space. So do a long jump to virtual
address just before calling z_x86_prep_c. From this point on,
code execution is in virtual address space.
Signed-off-by: Daniel Leung <daniel.leung@intel.com>
When linking in virtual address space, we still need physical
addresses in SRAM to be mapped so the platform can boot from physical
memory and to access structures necessary for boot (e.g. GDT and
IDT). So identity map the kernel in SRAM.
Signed-off-by: Daniel Leung <daniel.leung@intel.com>
When the kernel is mapped into virtual address space
that is different than the physical address space,
the dynamic GDT generation uses the virtual addresses.
However, the GDT table is required at boot before
the page table is loaded, when the virtual addresses are
invalid. So make sure GDT generation is using
physical addresses.
Signed-off-by: Daniel Leung <daniel.leung@intel.com>
There is an assumption made in the page table generation code
that the kernel would occupy the same physical and virtual
addresses. However, we may want to map the kernel into
a virtual address space which differs from kernel's physical
address space. For example, with demand paging enabled on
kernel code and data, we can accommodate kernel that is
larger than physical memory size, and may want to utilize
a bigger virtual address space. So add address translation
in the gen_mmu.py script for this.
Signed-off-by: Daniel Leung <daniel.leung@intel.com>
This adds virtual address translation to a few variables
used in crt0.S. This is needed as they are linked at
virtual addresses, but before the page table is loaded
they are not available at those addresses and must be
referenced via physical addresses.
Signed-off-by: Daniel Leung <daniel.leung@intel.com>
When feeding &z_shared_kernel_page_start directly to
Z_X86_PHYS_ADDR(), the compiler would complain about an array
subscript being out of bounds if linking in virtual address space.
So cast it to uintptr_t first before translation.
Signed-off-by: Daniel Leung <daniel.leung@intel.com>
Add a newer, much smaller and simpler implementation of abort and
join. No need to involve the idle thread. No need for a special code
path for self-abort. Joining a thread and waiting for an aborting one
to terminate elsewhere share an implementation. All work in both
calls happens under a single locked path with no unexpected
synchronization points.
This fixes a bug with the current implementation where the action of
z_sched_single_abort() was nonatomic, releasing the lock internally at
a point where the thread to be aborted could self-abort and confuse
the state such that it failed to abort at all.
Note that the arm32 and native_posix architectures, which have their
own thread abort implementations, now see a much simplified
"z_thread_abort()" internal API.
Signed-off-by: Andy Ross <andrew.j.ross@intel.com>
This changes x86 to use CONFIG_SRAM_OFFSET instead of
arch-specific CONFIG_X86_KERNEL_OFFSET. This allows the common
MMU macro Z_BOOT_VIRT_TO_PHYS() and Z_BOOT_PHYS_TO_VIRT() to
function properly if we ever need to map the kernel into
virtual address space that does not have the same starting
physical address.
Signed-off-by: Daniel Leung <daniel.leung@intel.com>
This adds a new kconfig CONFIG_SRAM_OFFSET to specify the offset
from beginning of SRAM where the kernel begins. On x86 and
PC compatible platforms, the first 1MB of RAM is reserved and
Zephyr should not link anything there. However, this 1MB still
needs to be mapped by the MMU to access various platform related
information. CONFIG_SRAM_OFFSET serves a similar function to
CONFIG_KERNEL_VM_OFFSET and is needed for proper phys/virt
address translations.
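A hedged sketch of the translation these offsets enable (the exact
macro bodies in the tree may differ):

  /* Assumed delta between the virtual and physical views of the
   * kernel, combining both offsets.
   */
  #define VM_OFFSET ((CONFIG_KERNEL_VM_BASE + CONFIG_KERNEL_VM_OFFSET) - \
                     (CONFIG_SRAM_BASE_ADDRESS + CONFIG_SRAM_OFFSET))

  #define Z_BOOT_PHYS_TO_VIRT(phys) ((phys) + VM_OFFSET)
  #define Z_BOOT_VIRT_TO_PHYS(virt) ((virt) - VM_OFFSET)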
Signed-off-by: Daniel Leung <daniel.leung@intel.com>
Instead of doing these in assembly, use the common z_bss_zero()
and z_data_copy() C functions instead. This simplifies code
a bit and we won't miss any additions to these two functions
(if any) under x86 in the future (as x86_64 was actually not
clearing gcov bss area).
Signed-off-by: Daniel Leung <daniel.leung@intel.com>
This moves calling z_loapic_enable() from crt0.S into
z_x86_prep_c(). This is done so we can move BSS clearing
and data section copying inside z_x86_prep_c() as
these are needed before calling z_loapic_enable().
Signed-off-by: Daniel Leung <daniel.leung@intel.com>
This adds a new kconfig to enable the use of a memory map.
This map can be populated automatically if
CONFIG_MULTIBOOT_MEMMAP=y or can be manually defined
via x86_memmap[].
Signed-off-by: Daniel Leung <daniel.leung@intel.com>
This is a hidden option to indicate we are building for
PC-compatible devices (which have a BIOS, ACPI, etc.,
standard on such devices).
Signed-off-by: Daniel Leung <daniel.leung@intel.com>
Before accessing the multiboot data passed by the bootloader,
we need to map the memory first. This adds the code to map
the memory if necessary.
Signed-off-by: Daniel Leung <daniel.leung@intel.com>
We assume that all x86 CPUs have the clflush instruction,
and the cache line size is now provided through DTS.
So detecting the clflush instruction as well as the cache line
size is no longer required at runtime and has thus been removed.
Signed-off-by: Tomasz Bursztyka <tomasz.bursztyka@linux.intel.com>
This adds the X86 keyword to the kconfigs to indicate these are
for x86. The old options are still there, marked as
deprecated.
Signed-off-by: Daniel Leung <daniel.leung@intel.com>
With x86, there are usually memory regions that are reserved
for firmware and device MMIOs. We don't want to use these
pages for memory mapping so mark them as reserved at boot.
The weakly defined x86_memmap contains the list of memory
regions which can be overridden by SoC or board configurations.
Also, with CONFIG_MULTIBOOT_MEMMAP=y, the memory regions
are populated from multiboot provided data.
Signed-off-by: Daniel Leung <daniel.leung@intel.com>
The x86_64 SysV ABI requires 16 byte alignment for the stack pointer
during execution of normal code. That means that on entry to an
ABI-compatible C function (which is reached via a CALL instruction
that pushes the return address) the RSP register must be MISaligned by
exactly 8 bytes. The kernel mode thread setup got this right, but we
missed the equivalent condition in userspace entry.
The end result was a misaligned stack, which is surprisingly robust
for most use. But recent toolchains have started doing some more
elaborate vectorization, and the resulting SSE instructions started
failing in userspace on the misaligned loads.
Note that there's a comment about optimization: we're doing the stack
alignment in the "wrong place" and are needlessly wasting bytes in
some cases. We should see the raw stack boundaries where we are
setting up RSP values. Add a FIXME to this effect, but don't touch
anything as this patch is a targeted bugfix.
Also fix a somewhat embarrassing 32-bit-ism that would have truncated
the address of a userspace stack that we tried to put above 4G.
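A hedged sketch of the required arithmetic (names are illustrative):

  #include <stdint.h>

  /* SysV x86_64: RSP must be 16-byte aligned before a CALL, so at
   * C function entry (return address already pushed) RSP % 16 == 8.
   */
  static uintptr_t initial_user_rsp(uintptr_t stack_top)
  {
      uintptr_t rsp = stack_top & ~(uintptr_t)0xF; /* 16-byte align */

      return rsp - 8; /* mimic the CALL-pushed return address */
  }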
Fixes #31018
Signed-off-by: Andy Ross <andrew.j.ross@intel.com>
This adds a new GEN_ABSOLUTE_SYM_KCONFIG() specifically for
generating absolute symbols in assembly for kconfig values.
This is needed as the existing GEN_ABSOLUTE_SYM() with
constraints in extended assembly parses the "value" as
signed 32-bit integers. An unsigned 32-bit integer with
the MSB set results in a negative number in the final binary.
This also prevents integers larger than 32 bits. So this
new macro simply puts the value inline within the assembly
instruction instead of having it as a parameter.
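A hedged sketch of the difference (simplified; the real macros carry
additional symbol-type directives):

  /* Existing approach: the value goes through an extended-asm
   * constraint, which is parsed as a signed 32-bit integer.
   */
  #define GEN_ABSOLUTE_SYM(name, value)              \
          __asm__(".globl " #name "\n\t"             \
                  ".set " #name ", %0" : : "n"(value))

  /* New approach: stringify the kconfig value directly into the
   * assembly, avoiding the signed-constraint parsing.
   */
  #define GEN_ABSOLUTE_SYM_KCONFIG(name, value)      \
          __asm__(".globl " #name "\n\t"             \
                  ".set " #name ", " STRINGIFY(value))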
Fixes #31562
Signed-off-by: Daniel Leung <daniel.leung@intel.com>
On Intel processors, if GS is not zero and is being set to
zero, GS_BASE is also being set to zero. This would interfere
with the actual use of GS_BASE for userspace. To avoid accidentally
clearing GS_BASE, simply set GS to 0 at boot, so any subsequent
clearing of GS will not clear GS_BASE.
The clearing of GS_BASE was discovered while trying to figure out
why the mem_protect test would hang within 10-20 repeated runs.
GDB revealed that both GS and GS_BASE were set to zero when the tests
hung. After setting GS to zero at boot, the mem_protect tests
ran repeatedly 5,000+ times without hanging.
Signed-off-by: Daniel Leung <daniel.leung@intel.com>
We've already enabled full RAM mapping if ACPI is enabled, also
set a large 3GB address space size, these systems are not RAM-
constrained (they are PC platforms) and they have large MMIO
config spaces for PCIe.
Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
All arch_ APIs and macros are implemented, and the page fault
handling code will call into the core kernel.
Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
Pre-allocation of paging structures is now required, such that
no allocations are ever needed when mapping memory.
Instantiation of new memory domains may still require allocations
unless a common page table is used.
Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
We no longer use a page pool to draw memory pages when doing
memory map operations. We now preallocate the entire virtual
address space so no allocations are ever necessary when mapping
memory.
We still need memory to clone page tables, but this is now
expressed by a new Kconfig X86_MAX_ADDITIONAL_MEM_DOMAINS
which has much clearer semantics than specifying the number
of pages in the pool.
The default address space size is now 8MB, but this can be
tuned by the application.
Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
A more comprehensive solution would use E820 enumeration, but we
are unlikely to ever care that much, as we intend to use demand
paging on microcontrollers and not PC-like hardware. This is
really to just prevent QEMU from crashing.
Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
This is only needed if the base address of SRAM doesn't
have the same alignment as the base address of the virtual
address space.
Fix the calculations on X86 where this is the case.
Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
When zefi.py was changed to pass the compiler and objcopy, the flag
to objcopy for the EFI target was dropped. This is because the
current SDK (0.12.1) doesn't support that target type for objcopy.
However, the target is necessary for the images to be created
correctly and boot.
Switch back to using the host objcopy as a stopgap fix, until the
SDK supports the EFI target.
Fixes: #31517
Signed-off-by: Kumar Gala <kumar.gala@linaro.org>
This changes the timing functions to use the TSC to gather
timing information instead of the timer used for scheduling,
as the TSC provides higher resolution for timing
information.
Signed-off-by: Daniel Leung <daniel.leung@intel.com>
This removes the z_ prefix from those (functions, enums, etc.) that
are being used outside the coredump subsys. This aligns better
with the naming convention.
Signed-off-by: Daniel Leung <daniel.leung@intel.com>
This adds the correct compiler and linker flags to
support software floating point operations. The flags
need to be added to TOOLCHAIN_*_FLAGS for GCC to find
the correct library (when calling GCC with
--print-libgcc-file-name).
Note that software floating point needs to be turned
on for Newlib. This is due to Newlib having floating
point numbers in its various printf() functions, which
results in floating point instructions being emitted
by the toolchain. These instructions are placed very
early in the functions, which results in them being
executed even though the format string contains
no floating point conversions. Without using CONFIG_FPU
to enable hardware floating point support, any calls to
printf()-like functions will result in exceptions
complaining that the FPU is not available. Forcing
CONFIG_FPU=y with Newlib is an option, but because
the OS doesn't know which threads would call these
printf() functions, Zephyr would have to assume all threads
use the FPU, thus incurring a performance penalty as
every context switch would need to save FPU registers.
A compromise here is to use soft float instead. Newlib
with soft float enabled does not have floating point
instructions and yet can still support its printf()
like functions.
Signed-off-by: Daniel Leung <daniel.leung@intel.com>
Currently, zefi.py takes the host GCC objcopy as
default. Fix the script to use CMAKE_C_COMPILER
and CMAKE_OBJCOPY.
Fixes: #27047
Signed-off-by: Spoorthy Priya Yerabolu <spoorthy.priya.yerabolu@intel.com>
The new APIs are not only dealing with cache flushing. Rename the
Kconfig symbol to CACHE_MANAGEMENT to better reflect this change.
Signed-off-by: Carlo Caione <ccaione@baylibre.com>
The kconfig options to configure the cache flushing framework are
currently living in the arch-specific kconfigs of ARC and X86 (32-bit)
architectures even though these are defining the same things.
Move the common symbols into one place accessible by all the architectures
and create a menu for those.
Leave the default values in the arch-specific locations.
Signed-off-by: Carlo Caione <ccaione@baylibre.com>
Until now, any attempt to call printk prior to early serial init has
caused page faults due to the device not being mapped yet. Add a static
variable to track the pre-init status, and instead of page faulting
just suppress the characters and log a warning right after init to
give an indication that output characters have been lost.
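A hedged sketch of the approach (names are illustrative, not the
actual driver symbols):

  static bool early_serial_ready;
  static unsigned int lost_chars;

  static void early_serial_putc(int c)
  {
      if (!early_serial_ready) {
          lost_chars++; /* suppress instead of page faulting */
          return;
      }
      /* ... write c to the now-mapped UART ... */
  }

  /* After init: set early_serial_ready = true and, if lost_chars > 0,
   * log a warning that output characters were lost.
   */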
Signed-off-by: Johan Hedberg <johan.hedberg@intel.com>
The original idea of using a custom switch-to-main-thread
function was to make sure the buffer for saving floating point
registers is aligned correctly, or else an exception would be
raised when saving/restoring those registers. Since
the struct of the buffer is defined with an alignment hint
for the toolchain, the alignment will be enforced by the toolchain
as long as the k_thread struct variable is a dedicated,
declared variable. So there is no need for the custom
switch-to-main-thread function anymore.
This also allows the stack usage calculation of
the interrupt stack to function properly as the end of
the interrupt stack is not being used for the dummy
thread anymore.
Signed-off-by: Daniel Leung <daniel.leung@intel.com>
Convert device to DEVICE_DEFINE instead of DEVICE_AND_API_INIT
so we can deprecate DEVICE_AND_API_INIT in the future.
Signed-off-by: Kumar Gala <kumar.gala@linaro.org>
Fix the following compilation error that happens when specifying a
fixed MMIO address for the UART through X86_SOC_EARLY_SERIAL_MMIO8_ADDR:
arch/x86/core/early_serial.c:30:26: error: #if with no expression
30 | #if DEVICE_MMIO_IS_IN_RAM
Signed-off-by: Johan Hedberg <johan.hedberg@intel.com>
Renamed to make its semantics clearer; this function maps
*physical* memory addresses and is not equivalent to
posix mmap(), which might confuse people.
mem_map test case remains the same name as other memory
mapping scenarios will be added in the fullness of time.
Parameter names to z_phys_map adjusted slightly to be more
consistent with names used in other memory mapping functions.
Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
The page table implementation requires conversion between virtual
and physical addresses when creating and walking page tables. Add
phys_addr() and virt_addr() functions instead of hard-casting
these values, plus a macro for doing the same in ASM code.
Currently, all pages are identity mapped so VIRT_OFFSET = 0, but
this will still work if they are not the same.
ASM language was also updated for 32-bit. Comments were left in
64-bit, as long mode semantics don't allow use of Z_X86_PHYS_ADDR
macro; this can be revisited later.
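A hedged sketch of the shape of these helpers (the offset constant is
an assumption; it is zero while everything is identity mapped):

  #include <stdint.h>

  #define VIRT_OFFSET 0UL /* hypothetical; derived from kconfig */

  static inline uintptr_t phys_addr(void *virt)
  {
      return (uintptr_t)virt - VIRT_OFFSET;
  }

  static inline void *virt_addr(uintptr_t phys)
  {
      return (void *)(phys + VIRT_OFFSET);
  }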
Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
Fix compiler warnings associated with 'level' and 'entry' variables
'may be used uninitialized in this function'
Signed-off-by: Kumar Gala <kumar.gala@linaro.org>
This was reporting the wrong page tables for supervisor
threads with KPTI enabled.
Analysis of existing use of this API revealed no problems
caused by this issue, but someone may trip over it eventually.
Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
We now show:
- Data pages that are paged out in red
- Pages that are mapped but non-present due to KPTI,
in cyan if they are identity mapped, or in blue if
they are not.
Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
With kernel page table isolation (KPTI) we cannot use the right
exception stack, since after using the trampoline stack there was
always a switch to the 7th IST stack (__x86_tss64_t_ist7_OFFSET).
Make this configurable as a parameter in EXCEPT(nr, ist) and
EXCEPT_CODE(nr, ist). For the NMI we use ist6 (_nmi_stack).
Signed-off-by: Andrei Emeltchenko <andrei.emeltchenko@intel.com>
NMI can be triggered at any time, even when in the process of
switching stacks. Use a special stack for it.
Signed-off-by: Andrei Emeltchenko <andrei.emeltchenko@intel.com>
range_map() no longer implicitly takes x86_mmu_lock, allowing
callers to use it if the lock is already held.
Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
- Remove SYS_ prefix
- shorten POWER_MANAGEMENT to just PM
- DEVICE_POWER_MANAGEMENT -> PM_DEVICE
and use PM_ as the prefix for all PM related Kconfigs
Signed-off-by: Anas Nashif <anas.nashif@intel.com>
Provide the necessary adjustments to get MSI-X working (with or without
Intel VT-D).
Signed-off-by: Tomasz Bursztyka <tomasz.bursztyka@linux.intel.com>
This is part of Intel VT-D: how to discover capabilities, base
addresses and so on in order to start taking advantage of it.
There is a lot to get from there, but currently we are interested
only in getting the remapping hardware base address, and more
specifically for interrupt remapping usage.
There might be more than one such hardware unit, so the exposed
function retrieves all of them.
Signed-off-by: Tomasz Bursztyka <tomasz.bursztyka@linux.intel.com>
This will be used by MSI multi-vector implementation to connect the irq
and the vector prior to allocation.
Signed-off-by: Tomasz Bursztyka <tomasz.bursztyka@linux.intel.com>
This is important for when we will need to atomically
un-map a page and get its dirty state before the un-mapping
completes.
Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
Most kernel files were declaring the os log module without
providing a log level. Because of that, the default log level
was used instead of CONFIG_KERNEL_LOG_LEVEL.
Signed-off-by: Krzysztof Chruscinski <krzysztof.chruscinski@nordicsemi.no>
Currently pcie_get_mbar only returns the physical address.
This changes the function to also return the size of the MBAR and
the flags (I/O BAR vs MEM BAR).
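A hedged sketch of the resulting shape of the API (field and
parameter names are assumptions):

  #include <stdbool.h>
  #include <stddef.h>
  #include <stdint.h>

  struct pcie_mbar {
      uintptr_t phys_addr; /* physical address decoded from the BAR */
      size_t size;         /* size of the region */
  };

  /* pcie_bdf_t is the existing PCI(e) bus/device/function type.
   * Returns true and fills *mbar on success.
   */
  bool pcie_get_mbar(pcie_bdf_t bdf, unsigned int index,
                     struct pcie_mbar *mbar);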
Signed-off-by: Maximilian Bachmann <m.bachmann@acontis.com>
Adds a new CONFIG_MPU which is set if an MPU is enabled. This
is a menuconfig with some MPU-specific options moved
under it.
MEMORY_PROTECTION and SRAM_REGION_PERMISSIONS have been merged.
This configuration depends on an MMU or MPU. The protection
test is updated accordingly.
Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
Since the tracing of threads being switched in/out has the same
instrumentation points, we can roll the tracing function calls
into the ones used for thread stats gathering.
This avoids duplicating code to call another function.
Signed-off-by: Daniel Leung <daniel.leung@intel.com>
We should not be initializing/starting/stopping timing functions
multiple times. So this changes how the timing functions are
structured to allow only one initialization, only starting when
stopped, and only stopping when started.
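A hedged sketch of the guarding logic (illustrative; the real code
may use counters instead of booleans):

  static bool timing_initialized;
  static bool timing_running;

  void timing_init(void)
  {
      if (!timing_initialized) {
          timing_initialized = true;
          /* ... arch/SoC timing init ... */
      }
  }

  void timing_start(void)
  {
      if (!timing_running) {
          timing_running = true; /* only start when stopped */
      }
  }

  void timing_stop(void)
  {
      if (timing_running) {
          timing_running = false; /* only stop when started */
      }
  }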
Signed-off-by: Daniel Leung <daniel.leung@intel.com>
We provide an option for low-memory systems to use a single set
of page tables for all threads. This is only supported if
KPTI and SMP are disabled. This configuration saves a considerable
amount of RAM, especially if multiple memory domains are used,
at a cost of context switching overhead.
Some caching techniques are used to reduce the amount of context
switch updates; the page tables aren't updated if switching to
a supervisor thread, and the page table configuration of the last
user thread switched in is cached.
Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
This will do until we can set up a proper page pool using
all unused ram for paging structures, heaps, and anonymous
mappings.
Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
Help users understand how this should be tuned. Rather than
guessing wildly, set the default to 0. This needs to be tuned
on a per-board, per-application basis anyway.
Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
We don't need this for stacks any more and only use this
for pre-calculating the boot page tables size. Move to C
code, this doesn't need to be in headers anywhere.
Names adjusted for conciseness.
Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
- z_x86_userspace_enter() for both 32-bit and 64-bit now
call into C code to clear the stack buffer and set the
US bits in the page tables for the memory range.
- Page tables are now associated with memory domains,
instead of having separate page tables per thread.
A spinlock protects write access to these page tables,
and read/write access to the list of active page
tables.
- arch_mem_domain_init() implemented, allocating and
copying page tables from the boot page tables.
- struct arch_mem_domain defined for x86. It has
a page table link and also a list node for iterating
over them.
Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
Page table management for x86 is being revised such that there
will not in many cases be a pristine, master set of page tables.
Instead, when mapping memory, use unused PTE bits to store the
original RW, US, and XD settings when the mapping was made.
This will allow memory domains to alter page tables while still
being able to restore the original mapping permissions.
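A hedged sketch of the idea (bit positions are assumptions; x86 PTEs
leave bits 9-11 available to software):

  #include <sys/util.h>

  /* Architectural permission bits in a PTE. */
  #define MMU_RW   BIT64(1)  /* writable */
  #define MMU_US   BIT64(2)  /* user-accessible */
  #define MMU_XD   BIT64(63) /* execute disable */

  /* Hypothetical: stash the original permissions in the
   * software-available bits so they can be restored after a
   * memory domain alters the mapping.
   */
  #define SAVED_RW BIT64(9)
  #define SAVED_US BIT64(10)
  #define SAVED_XD BIT64(11)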
Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
This will be needed when we support memory un-mapping, or
the same user mode page tables on multiple CPUs. Neither
are implemented yet.
Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
In the code path for nested interrupts, we are not saving
RBX, yet the assembly code is using it as a storage location
for the ISR.
Use RAX. It is backed up in both the nested and non-nested
cases, and the ASM code is not currently using it at that
point.
Fixes: #29594
Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
Adds the necessary bits to initialize TLS in the stack
area and sets up CPU registers during context switch.
Signed-off-by: Daniel Leung <daniel.leung@intel.com>
Adds the necessary bits to initialize TLS in the stack
area and sets up CPU registers during context switch.
Signed-off-by: Daniel Leung <daniel.leung@intel.com>
This seems like a typo since all other places accessing bus_segs in
this context use i as the index.
Signed-off-by: Johan Hedberg <johan.hedberg@intel.com>
These days all threads are always a member of a memory domain,
remove this NULL check as it won't ever be false.
Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
This function iterates over the thread's memory domain
and updates page tables based on it. We need to be holding
z_mem_domain_lock while this happens.
Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
Fixes the following compilation errors:
- sys_cache_line_size was undeclared at first use
- there was an assignment to an rvalue in arch_dcache_flush
Signed-off-by: Maximilian Bachmann <m.bachmann@acontis.com>
Originally the EFI boot code was written to assume that all sections
in the ELF file were 8-byte aligned and sized (because I thought this
was part of some platform spec somewhere). This turned out to be
wrong in practice (at least for section sizes), so the requirement was
reduced to 4 bytes. But now we have a section being generated
somewhere that turns out to violate even that.
There's no particular value in doing those copies in big chunks.
There's at best a mild performance benefit, but if we really cared
we'd be using a more complicated memcpy() implementation anyway.
Replace the loop in the C code with a bytewise copy, change the size
field in the generated header to store bytes, and remove the
assertions (which were the failures actually being seen in practice)
in the script that were there to detect this misalignment.
Fixes #29095
Signed-off-by: Andy Ross <andrew.j.ross@intel.com>
The hardcoded APIC ID will be kept as default if the CPU is not found in
ACPI MADT.
Note that ACPI may expose more "CPUs" than there actually are
physically. Thus, make the logic aware of this possibility by checking
the enabled flag (non-enabled CPUs are ignored).
This fixes the up_squared board with a Celeron CPU.
Signed-off-by: Tomasz Bursztyka <tomasz.bursztyka@linux.intel.com>
No need to mix super short versions of names with other structures
having full names. Let's follow a more relevant naming scheme where
each and every attribute name is self-documenting (such as
s/id/apic_id etc.).
Also make CONFIG_ACPI usable through IS_ENABLED by enclosing exposed
functions with ifdef CONFIG_ACPI.
Signed-off-by: Tomasz Bursztyka <tomasz.bursztyka@linux.intel.com>
We are not RAM-constrained and there is an open issue where
exception stack overflows are not caught. Increase this size
so that options like CONFIG_NO_OPTIMIZATIONS work without
incident.
Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
Commit 5632ee26f3 introduced an issue where in order to use MMIO
configuration:
- do_pcie_mmio_cfg is required to be true
- Only set to true in pcie_mm_init()
- Which is only called from pcie_mm_conf()
- Which is only called from pcie_conf() if do_pcie_mmio_cfg is
already true!
The end result is that MMIO configuration will never be used.
Fix the situation by moving the initialization check to pcie_conf().
Signed-off-by: Johan Hedberg <johan.hedberg@intel.com>
The current instrumentation point for CONFIG_TRACING added in
PR #28512 had two problems:
- If userspace and KPTI are enabled, the tracing point is simply
never run if we are resuming a user thread as the
z_x86_trampoline_to_user function is jumped to and calls
'iret' from there
- Only %rdi is being saved. However, at that location, *all*
caller-saved registers are in use as they contain the
resumed thread's context
Simplest solution is to move this up near where we update page
tables. The #ifdefs are used to make sure we don't push/pop
%rdi more than once. At that point in the code only %rdi
is in use among the volatile registers.
Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
Tracing switched in threads in C code does not work, it needs to happen
in the arch_switch code. See also Xtensa and ARC.
Signed-off-by: Anas Nashif <anas.nashif@intel.com>
Newer QEMU (5.1) hangs / times out on a number of tests on x86_64.
In debugging the issue, this was traced to a fix in QEMU 5.1 that
validates memory region access. QEMU has the APIC region only allowing
1 to 4 byte access. 64-bit access is treated as an error.
Change the APIC EOI access in locore.S back to just doing a 32-bit
access.
Fixes #28453
Signed-off-by: Kumar Gala <kumar.gala@linaro.org>
The boot code of x86_64 initializes the stack (if enabled)
with a hard-coded size for the ISR stack. However,
the stack being used does not have to be the ISR stack,
and can be any defined stack. So pass in the actual size
of the stack so it can be initialized properly.
Fixes #21843
Signed-off-by: Daniel Leung <daniel.leung@intel.com>
Changes to paging code ensured that the NULL virtual page is
never mapped. Since RAM is identity mapped, on a PC-like
system accessing the BIOS Data Area in the first 4K requires
a memory mapping. We need to read this to probe the ACPI RSDP.
Additionally, check that the BDA actually has something in it
and is not just a bunch of zeroes.
It is unclear whether this function is truly safe on UEFI
systems, but that is for another day.
Fixes: #27867
Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
When probing for PCI-E device resources, it is possible that
configuration via MMIO is not available. This may be caused by
the BIOS or its settings. So when CONFIG_PCIE_MMIO_CFG=y, have
a fallback path to configure devices via PIO. The inability to
configure via MMIO has been observed on a couple of UP Squared
boards.
Fixes #27339
Signed-off-by: Daniel Leung <daniel.leung@intel.com>
This code had one purpose only: feed timing information into a test.
It was not used by anything else. The custom trace points unfortunately
were not accurate, and this test was delivering information that
conflicted with other tests we have, due to the placement of such
trace points in the architecture and kernel code.
For such measurements we are planning to use the tracing functionality
in a special mode that would be used for metrics without polluting the
architecture and kernel code with additional tracing and timing code.
Furthermore, much of the assembly code used had issues.
Signed-off-by: Anas Nashif <anas.nashif@intel.com>
Signed-off-by: Daniel Leung <daniel.leung@intel.com>
Add initial support for X86 and get timestamps from the TSC.
Authored-by: Daniel Leung <daniel.leung@intel.com>
Signed-off-by: Anas Nashif <anas.nashif@intel.com>
We no longer plan to support a split address space with
the kernel in high memory and per-process address spaces.
Because of this, we can simplify some things. System RAM
is now always identity mapped at boot.
We no longer require any virtual-to-physical translation
for page tables, and can remove the dual-mapping logic
from the page table generation script since we won't need
to transition the instruction pointer off of physical
addresses.
CONFIG_KERNEL_VM_BASE and CONFIG_KERNEL_VM_LIMIT
have been removed. The kernel's address space always
starts at CONFIG_SRAM_BASE_ADDRESS, with a fixed size
specified by CONFIG_KERNEL_VM_SIZE.
Driver MMIOs and other uses of k_mem_map() are still
virtually mapped, and the later introduction of demand
paging will result in only a subset of system RAM being
a fixed identity mapping instead of all of it.
Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
In order for usermode threads to be debuggable, they need to be able
to issue breakpoint and debug exceptions. To do this it is necessary
to set the DPL bits to, at least, the same CPL level.
Signed-off-by: Flavio Ceolin <flavio.ceolin@intel.com>
This implements the GDB remote protocol to talk with a host GDB
during the debug session. The implementation is divided into three
layers:
1 - The top layer that is responsible for the GDB remote protocol.
2 - An architecture specific layer responsible for writing/reading
registers, setting breakpoints, handling exceptions, ...
3 - A transport layer used to communicate with the host
The communication with GDB on the host is synchronous: the system
stops execution waiting for instructions and resumes execution after
a "continue" or "step" command. The protocol has one exception:
when the host sends a packet to cause an interruption, usually
triggered by a Ctrl-C. This implementation ignores that instruction,
though.
This initial work supports only X86 using the UART as backend.
Signed-off-by: Flavio Ceolin <flavio.ceolin@intel.com>
Now that device_api attribute is unmodified at runtime, as well as all
the other attributes, it is possible to switch all device driver
instances to be constant.
A coccinelle rule is used for this:
@r_const_dev_1
disable optional_qualifier
@
@@
-struct device *
+const struct device *
@r_const_dev_2
disable optional_qualifier
@
@@
-struct device * const
+const struct device *
Fixes #27399
Signed-off-by: Tomasz Bursztyka <tomasz.bursztyka@linux.intel.com>
The x86 paging code has been rewritten to support another paging mode
and non-identity virtual mappings.
- Paging code now uses an array of paging level characteristics and
walks tables using for loops. This is opposed to having different
functions for every paging level and lots of #ifdefs. The code is
now more concise and adding new paging modes should be trivial.
- We now support 32-bit, PAE, and IA-32e page tables.
- The page tables created by gen_mmu.py are now installed at early
boot. There are no longer separate "flat" page tables. These tables
are mutable at any time.
- The x86_mmu code now has a private header. Many definitions that did
not need to be in public scope have been moved out of mmustructs.h
and either placed in the C file or in the private header.
- Improvements to dumping page table information, with the physical
mapping and flags all shown
- arch_mem_map() implemented
- x86 userspace/memory domain code ported to use the new
infrastructure.
- add logic for physical -> virtual instruction pointer transition,
including cleaning up identity mappings after this takes place.
Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
The address was being truncated because we were using
32-bit registers. CONFIG_MMU is always enabled on 64-bit,
remove the #ifdef.
Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
We need to produce a binary set of page tables wired together
by physical address. Add build system logic to use the script
to produce them.
Some logic for running build scripts that produce artifacts moved
out of IA32 into common CMake code.
Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
This produces a set of page tables with system RAM
mapped for read/write/execute access by supervisor
mode, such that it may be installed in the CPU
in the earliest boot stages and mutable at runtime.
These tables optionally support a dual physical/virtual
mapping of RAM to help boot virtual memory systems.
Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
Move tracing switched_in and switched_out to the architecture code and
remove duplications. This changes swap tracing for x86, xtensa.
Signed-off-by: Anas Nashif <anas.nashif@intel.com>
With the current identity mapping scheme a new test requires
some more memory to be set aside here.
In production this parameter gets tuned per-board, and
the pending paging code overhaul in #27001 significantly
relaxes this as driver I/O mappings are no longer sparse.
Fixes a runtime failure in tests/kernel/device on
qemu_x86_64 that somehow slipped past CI.
Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
Unify how XIP is configured across architectures. Use imply instead
of setting defaults per architecture, imply XIP on the riscv arch,
and remove XIP configuration from individual defconfig files to
match other architectures.
Signed-off-by: Anas Nashif <anas.nashif@intel.com>
This set of functions seem to be there just because of historical
reasons, stemming from Kbuild. They are non-obvious and prone to errors,
so remove them in favor of the `_ifdef()` ones with an explicit
`CONFIG_` condition.
Script used:
git grep -l _if_kconfig | xargs sed -E -i
"s/_if_kconfig\(\s*(\w*)/_ifdef(CONFIG_\U\1\E \1/g"
Signed-off-by: Carles Cufi <carles.cufi@nordicsemi.no>
These stacks are appropriate for threads that run purely in
supervisor mode, and also as stacks for interrupt and exception
handling.
Two new arch defines are introduced:
- ARCH_KERNEL_STACK_GUARD_SIZE
- ARCH_KERNEL_STACK_OBJ_ALIGN
New public declaration macros:
- K_KERNEL_STACK_RESERVED
- K_KERNEL_STACK_EXTERN
- K_KERNEL_STACK_DEFINE
- K_KERNEL_STACK_ARRAY_DEFINE
- K_KERNEL_STACK_MEMBER
- K_KERNEL_STACK_SIZEOF
If user mode is not enabled, K_KERNEL_STACK_* and K_THREAD_STACK_*
are equivalent.
Separately generated privilege elevation stacks are now declared
like kernel stacks, removing the need for K_PRIVILEGE_STACK_ALIGN.
Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
This now takes a stack pointer as an argument with TLS
and random offsets accounted for properly.
Based on #24467 authored by Flavio Ceolin.
Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
The core kernel computes the initial stack pointer
for a thread, properly aligning it and subtracting out
any random offsets or thread-local storage areas.
arch_new_thread() no longer needs to make any calculations,
an initial stack frame may be placed at the bounds of
the new 'stack_ptr' parameter passed in. This parameter
replaces 'stack_size'.
thread->stack_info is now set before arch_new_thread()
is invoked, z_new_thread_init() has been removed.
The values populated may need to be adjusted on arches
which carve out MPU guard space from the actual stack
buffer.
thread->stack_info now has a new member 'delta' which
indicates any offset applied for TLS or random offset.
It's used so the calculations don't need to be repeated
if the thread later drops to user mode.
CONFIG_INIT_STACKS logic is now performed inside
z_setup_new_thread(), before arch_new_thread() is called.
thread->stack_info is now defined as the canonical
user-accessible area within the stack object, including
random offsets and TLS. It will never include any
carved-out memory for MPU guards and must be updated at
runtime if guards are removed.
Available stack space is now optimized. Some arches may
need to significantly round up the buffer size to account
for page-level granularity or MPU power-of-two requirements.
This space is now accounted for and used by virtue of
the Z_THREAD_STACK_SIZE_ADJUST() call in z_setup_new_thread.
Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
MISRA-C wants the parameter names in a function implementation
to match the names used by the header prototype.
Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
arch_new_thread() passes along the thread priority and option
flags, but these are already initialized in thread->base and
can be accessed there if needed.
Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
The printf function didn't have enough specifiers for the
number of arguments in the command line (Coverity warning).
Fixes #26985
Fixes #26986
Signed-off-by: David Leach <david.leach@nxp.com>
MISRA-C directive 4.10 requires that files being included must
prevent themselves from being included more than once. So add
include guards to the offset files, even though they are C
source files.
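A minimal sketch of such a guard (the guard name is illustrative):

  #ifndef OFFSETS_SHORT_C_GUARD_ /* hypothetical name */
  #define OFFSETS_SHORT_C_GUARD_

  /* ... offset symbol definitions ... */

  #endif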
Signed-off-by: Daniel Leung <daniel.leung@intel.com>
It's not safe to assume that the data section is 8-byte aligned.
Assuming 4-byte alignment seems to work however, and results in
simpler code than arbitrary alignment support.
Signed-off-by: Johan Hedberg <johan.hedberg@intel.com>
The hardware stack overflow feature requires
CONFIG_THREAD_STACK_INFO enabled in order to distinguish
stack overflows from other causes when we get an exception.
Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
A hack was required for the loapic code due to the address
range not being in DTS. A bug was filed.
Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
This driver code uses PCIe and doesn't use Zephyr's
device model, so we can't use the nice DEVICE_MMIO macros.
Set stuff up manually instead using device_map().
Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
This currently only supports identity paging; there's just
enough here for device_map() calls to work.
Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
Adding just the cache flush function for x86. The name
arch_cache_flush complies with the API names in include/cache.h.
Signed-off-by: Aastha Grover <aastha.grover@intel.com>
The p_memsz field which indicates the size of a segment in memory
isn't always a multiple of 8. Remove the assert and add padding if
necessary. Without this change it's not possible to generate EFI
binaries out of all samples & tests in the tree.
Signed-off-by: Johan Hedberg <johan.hedberg@intel.com>
The page table initialization needs a populated PCI MMIO
configuration, and that is lazy-evaluated. We aren't guaranteed that
a driver already hit that path, so be sure to call it explicitly.
Signed-off-by: Andy Ross <andrew.j.ross@intel.com>
The firmware on existing devices uses HPET timer zero for its own
purposes, and leaves it alive with interrupts enabled. The Zephyr
driver now knows how to recover from this state with fuller
initialization, but that's not enough to fix the inherent race:
The timer can fire BEFORE the driver initialization happens (and does,
with certain versions of the EFI shell), thus flagging an interrupt to
what Zephyr sees as a garbage vector. The OS can't fix this on its
own, the EFI bootloader (which is running with interrupts enabled as
part of the EFI environment) has to do it. Here we can know that our
setting got there in time and didn't result in a stale interrupt flag
in the APIC waiting to blow up when interrupts get enabled.
Note: this is really just a workaround. It assumes the hardware has
an HPET with a standard address. Ideally we'd be able to build zefi
using Zephyr kconfig and devicetree values and predicate the HPET
reset on the correct configuration.
Signed-off-by: Andy Ross <andrew.j.ross@intel.com>