Swap page tables at exit of the exception handler if we are going to
be restored to another thread context. Otherwise we would be using
the outgoing thread's page tables, which is not going to work
correctly due to mappings and permissions.
Signed-off-by: Daniel Leung <daniel.leung@intel.com>
When adding a thread to a memory domain, we need to also update
the mapped page table if it is the currently running thread on
the same CPU. If it's not on the same CPU, we need to notify
the other CPUs in case the thread is running on one of them.
Signed-off-by: Daniel Leung <daniel.leung@intel.com>
Signed-off-by: Anas Nashif <anas.nashif@intel.com>
Signed-off-by: Flavio Ceolin <flavio.ceolin@intel.com>
After changing the content of the page table(s), the other CPUs need
to be notified that the page table(s) have been changed so they
can do the necessary steps to use the updated version. Note that
the actual way to send an IPI is SoC specific, as Xtensa does not
have a common way to do this at the moment.
Signed-off-by: Daniel Leung <daniel.leung@intel.com>
When a kernel OOPS is raised, we need to actually go through
the process of terminating the offending thread, instead of
simply printing the stack and continuing to run. This change
employs a mechanism similar to xtensa_arch_except(), using an
illegal instruction to raise a hardware exception and going
through the fatal exception path.
Signed-off-by: Daniel Leung <daniel.leung@intel.com>
Signed-off-by: Flavio Ceolin <flavio.ceolin@intel.com>
Triggering an exception on Xtensa requires kernel privileges. Add
a new syscall that is used when ARCH_EXCEPT is invoked from userspace.
Signed-off-by: Flavio Ceolin <flavio.ceolin@intel.com>
This extracts the printing of fatal exception information into
its own function to declutter xtensa_excint1_c().
Signed-off-by: Daniel Leung <daniel.leung@intel.com>
Signed-off-by: Flavio Ceolin <flavio.ceolin@intel.com>
There are known exceptions which are not fatal, and we need to
handle them properly by returning to the fixup addresses as
indicated. This adds the necessary code to the exception
handler for this situation.
Signed-off-by: Daniel Leung <daniel.leung@intel.com>
Signed-off-by: Flavio Ceolin <flavio.ceolin@intel.com>
When the MMU is enabled, we need some scratch registers to preload
page table entries, so update gen_zsr.py accordingly.
Signed-off-by: Daniel Leung <daniel.leung@intel.com>
Signed-off-by: Flavio Ceolin <flavio.ceolin@intel.com>
This changes TLB miss handling back to assembly in the user
exception vector, with any page faults during TLB misses handled
in the double exception handler. This should speed up simple TLB
miss handling, as we don't have to go all the way to the C handler.
Signed-off-by: Daniel Leung <daniel.leung@intel.com>
Signed-off-by: Flavio Ceolin <flavio.ceolin@intel.com>
Userspace support for the Xtensa architecture using the Xtensa MMU.
Some considerations:
- Syscalls are not inline functions like in other architectures because
  of compiler issues when using multiple registers to pass parameters
  to the syscall. So here we use a function call so we can use the
  registers as we need.
- TLS is not supported by xcc on Xtensa, and reading the PS register is
  a privileged instruction. So we have to use THREADPTR to know whether
  a thread is a user-mode thread (see the sketch below).
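A minimal sketch of the THREADPTR-based check (illustrative only; the
exact helper lives in the Xtensa arch headers and may differ):

    #include <stdbool.h>
    #include <stdint.h>

    /* xcc has no TLS and reading PS is privileged, so a nonzero
     * THREADPTR is used as the "this thread runs in user mode" marker.
     */
    static inline bool is_user_context(void)
    {
        uint32_t thread_ptr;

        __asm__ volatile("rur.THREADPTR %0" : "=r"(thread_ptr));
        return thread_ptr != 0;
    }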
Signed-off-by: Flavio Ceolin <flavio.ceolin@intel.com>
Signed-off-by: Daniel Leung <daniel.leung@intel.com>
Simplify the logic around xtensa_mmu_init.
- Do not have a different path to init part of the kernel
- Call xtensa_mmu_init from C
Signed-off-by: Flavio Ceolin <flavio.ceolin@intel.com>
Replace all autorefill helpers with a single one that invalidates both
the DTLB and the ITLB, since that is what is really needed.
Signed-off-by: Flavio Ceolin <flavio.ceolin@intel.com>
This register alias was originally introduced to allow A0 to be used
as a scratch register when handling exceptions from MOVSP
instructions. (It replaced some upstream code from Cadence that
hard-coded EXCSAVE1). Now the MMU code is using it too, and for
exactly the same purpose.
Calling it "ALLOCA" is only confusing. Rename it to make it clear
what it's doing.
Signed-off-by: Andy Ross <andyross@google.com>
TF-M only supports floating point in the IPC model, not the SFN model.
Since floating point is a basic feature of the architecture and TF-M
has this limitation, it makes more sense for the dependency to exist in
TF-M and limit the TF-M model choice instead of limiting the
option to enable floating point.
Signed-off-by: Joakim Andersson <joakim.andersson@nordicsemi.no>
Some workarounds were introduced for Intel cavs2.5 platform bring-up.
They are not general, so move them to platform code.
Signed-off-by: Rander Wang <rander.wang@intel.com>
Each arch platform may have a general arch_cpu_idle implementation, but
each vendor may have a custom one, so this config will be used by vendors
to override it.
Some workarounds were introduced for Intel cavs2.5 platform bring-up.
They are not general, so move them to platform code.
Signed-off-by: Rander Wang <rander.wang@intel.com>
When building without optimizations and with only one core, the linker
does not throw away arch_start_cpu and we get an undefined reference to
x86_ap_start.
Signed-off-by: Flavio Ceolin <flavio.ceolin@intel.com>
This moves the k_* memory management functions from sys/ into
kernel/ includes, as these are kernel public APIs. The z_*
functions are further separated into the kernel internal
header directory.
Also made a quick change to doxygen to group sys_mem_* into
the OS Memory Management group so they will appear in the documentation.
Signed-off-by: Daniel Leung <daniel.leung@intel.com>
z_mp_entry has been removed from Xtensa architecture.
So there is no need for a function declaration. Remove it.
Signed-off-by: Daniel Leung <daniel.leung@intel.com>
The NXP SYSMPU is used in other SoCs besides the Kinetis series. On
devices like S32K1xx, its bus interface clock lacks clock gating
and is driven by the system clock. Hence, only enable the module
clock for the Kinetis series.
Signed-off-by: Manuel Argüelles <manuel.arguelles@nxp.com>
This patch changes the section of the riscv_cpu_wake_flag variable
from bss to noinit to fix a hang of the RISC-V multicore boot when
hart0 is not the boot hart (CONFIG_RV_BOOT_HART != 0).
The current boot sequence initializes riscv_cpu_wake_flag to -1, but
this variable is unintentionally changed to 0 by the boot hart.
This is because the variable is placed in the bss section, so this
patch changes the section of the variable to noinit.
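A minimal sketch of the resulting placement (the exact type and
declaration in the RISC-V startup code may differ; only the __noinit
attribute is the point here):

    #include <stdint.h>
    #include <zephyr/linker/section_tags.h>   /* __noinit */

    /* Keeping the flag out of .bss means the -1 sentinel written before
     * the secondary harts start polling is not wiped when the boot hart
     * zeroes the BSS section.
     */
    volatile uintptr_t __noinit riscv_cpu_wake_flag;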
Signed-off-by: Katsuhiro Suzuki <katsuhiro@katsuster.net>
This patch fixes a hang during RISC-V multicore boot.
Currently the boot sequence uses riscv_cpu_wake_flag to notify the
wakeup request to the secondary core(s).
But the initial value of riscv_cpu_wake_flag is undefined, so the
current mechanism can hang if riscv_cpu_wake_flag and the mhartid of a
secondary core have the same value.
This is an example of the problem:
- hart1: checks riscv_cpu_wake_flag (value is 1) and exits the loop
- hart1: sets riscv_cpu_wake_flag to 0
- hart0: sets riscv_cpu_wake_flag to 1;
  hart0 expects it to be changed to 0 by hart1, but that never happens
Note:
- hart0's mhartid is 0, hart1's mhartid is 1
- hart0 is main, hart1 is secondary in this example
Signed-off-by: Katsuhiro Suzuki <katsuhiro@katsuster.net>
The FMADD, FMSUB, FNMSUB and FNMADD instructions occupy major opcode
spaces of their own, separate from LOAD-FP/STORE-FP and OP-FP spaces.
Insert code to cover them.
Signed-off-by: Nicolas Pitre <npitre@baylibre.com>
elf_rela_t begins with exactly the same fields as elf_rel_t and adds
one additional field at the end. Therefore pointers of that type can
be used for both types, making the code generic.
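For reference, the standard ELF32 layouts (Zephyr's elf_rel_t /
elf_rela_t typedefs follow the same layout, which is what this relies
on; the names below are illustrative):

    #include <stdint.h>

    typedef struct {
        uint32_t r_offset;   /* location to apply the relocation */
        uint32_t r_info;     /* symbol table index and relocation type */
    } example_elf32_rel;

    typedef struct {
        uint32_t r_offset;
        uint32_t r_info;
        int32_t  r_addend;   /* extra field, only present in RELA */
    } example_elf32_rela;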
Signed-off-by: Guennadi Liakhovetski <guennadi.liakhovetski@linux.intel.com>
Most of the public APIs in `riscv_plic.h`
(except `riscv_plic_get_irq` & `riscv_plic_get_dev`) expect the
`irq` argument to be in Zephyr-encoded format, instead of the
previously `irq_from_level_2`-stripped version. The first-level
IRQ is needed by `intc_plic` to differentiate between the
parent interrupt controllers, so that the correct ISR offset can be
obtained using the LUT in `sw_isr_common`.
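A hedged usage sketch (IRQ line 10 is an arbitrary example; see the
headers for the authoritative prototypes):

    #include <zephyr/irq_multilevel.h>
    #include <zephyr/drivers/interrupt_controller/riscv_plic.h>

    void plic_enable_example(void)
    {
        /* pass the full Zephyr-encoded IRQ, level-2 bits included,
         * instead of a stripped PLIC-local number
         */
        unsigned int zephyr_irq = irq_to_level_2(10);

        riscv_plic_irq_enable(zephyr_irq);
    }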
Signed-off-by: Yong Cong Sin <ycsin@meta.com>
Instead of using a macro guard to prevent functions in
`sw_isr_common.c` from being compiled when
`CONFIG_DYNAMIC_INTERRUPTS` isn't enabled, do that in
`CMakeLists.txt`.
Signed-off-by: Yong Cong Sin <ycsin@meta.com>
Refactor multi-level IRQ related code from `sw_isr_common.c` into
`multilevel_irq.c` to simplify `sw_isr_common` and its macrology.
Signed-off-by: Yong Cong Sin <ycsin@meta.com>
Relocate new and existing internal software-managed table
access functions from the public `sw_isr_table.h` into a
private header that should only be accessed internally.
Signed-off-by: Yong Cong Sin <ycsin@meta.com>
Change the internal function to `get_parent_entry`, which
returns the entire entry of the table.
Store the parent interrupt controller device in the
`irq_parent_offset` table, and add 2 helper functions to:
1. determine the parent interrupt controller based on the IRQ
2. determine the IRQ of the parent interrupt controller
Declare `struct _irq_parent_entry` in the header and add the
`-` suffix to the struct so that it can be used to test the
functions in testsuites.
Signed-off-by: Yong Cong Sin <ycsin@meta.com>
This change adds support for the CONFIG_ARM_CUSTOM_INTERRUPT_CONTROLLER
option on Cortex-M platforms. While all Cortex-M platforms have an
NVIC controller, some custom SoCs may have additional IRQ controllers
or custom handling. This change allows those SoCs to modify this
behaviour without having to place platform-specific logic inside
applications or drivers.
Signed-off-by: Corey Wharton <xodus7@cwharton.com>
The Cortex ARM documentation states that the DC IVAC instruction
requires write access permission to the virtual address (VA);
otherwise, it may generate a permission fault.
Therefore, we need to avoid invalidating read-only memory
after the memory map operation.
This issue was introduced by commit c9b534c.
This commit resolves issue #64758.
Signed-off-by: Mykola Kvach <mykola_kvach@epam.com>
Some platforms already have the .bss section zeroed out externally before
Zephyr initialization, and there is no sense in zeroing it out a second
time from software.
Such a boot-time optimization could be critical, e.g. for RTL simulation.
Signed-off-by: Alexander Razinkov <alexander.razinkov@syntacore.com>
This commit introduces SMP support for the Cortex-A/R AArch32 architecture.
For now, this only supports multiple cores starting together; only one
CPU is allowed to initialize the system as the primary core, while the
others loop at the beginning as secondary cores and wait to be woken up.
Signed-off-by: Huifeng Zhang <Huifeng.Zhang@arm.com>
This commit introduces the 'USE_SWITCH' feature into the Cortex-A/R
(AArch32) architecture.
For introducing USE_SWITCH, the exception entry and exit are unified via
`z_arm_cortex_ar_enter_exc` and `z_arm_cortex_ar_exit_exc`. All
exceptions, including ISRs, use this way to enter and exit the exception
handler.
Exception depth and interrupt depth are differentiated: a context switch
is allowed when the exception depth is greater than 1, but not when the
interrupt depth is greater than 1.
Currently, USE_SWITCH doesn't support FPU_SHARING and USERSPACE.
Signed-off-by: Huifeng Zhang <Huifeng.Zhang@arm.com>
Store the current CPU's struct _cpu instance into TPIDRURO, so that the
CPU core can get its struct _cpu instance by reading TPIDRURO. This is
useful in SMP systems.
Signed-off-by: Huifeng Zhang <Huifeng.Zhang@arm.com>
Move the TLS base address pointer from TPIDRURO to TPIDRURW.
The difference between them is that TPIDRURO is read-only in user mode
but TPIDRURW isn't, so TPIDRURO is much more suitable for storing
the address of _kernel.CPU[n]. For this reason, this commit moves
the base pointer of the TLS area.
Signed-off-by: Huifeng Zhang <Huifeng.Zhang@arm.com>
The MMU or MPU unit needs to be initialized by its own CPU.
- The primary core initializes the MMU or MPU unit in z_arm_prep_c.
- Secondary cores initialize the MMU or MPU unit in z_arm_secondary_start.
Signed-off-by: Huifeng Zhang <Huifeng.Zhang@arm.com>
Use ACPI_MADT_LOCAL_APIC instead of struct acpi_madt_local_apic. While
at it, switch from ifdef to IF_ENABLED - slightly more readable, and
this keeps some static analyzers happy (e.g. the upstream Compliance check).
Signed-off-by: Johan Hedberg <johan.hedberg@intel.com>
HCR_EL2 is configured to a certain value by some
loaders, such as U-Boot, on some arm64 boards (such as roc_rk3568_pc).
When the HCR_EL2.TGE, HCR_EL2.AMO and HCR_EL2.IMO bits are
set to 1, some unpredictable behaviors may occur during
Zephyr boot. So we clear these bits to avoid that.
Signed-off-by: Charlie Xiong <1981639884@qq.com>
Move the syscall_handler.h header, used internally only, to a dedicated
internal folder that should not be used outside of Zephyr.
Signed-off-by: Anas Nashif <anas.nashif@intel.com>
Locate common mpu code together with other arm / nxp mpu code in the
arch folder where it logically belongs.
Signed-off-by: Torsten Rasmussen <Torsten.Rasmussen@nordicsemi.no>
Relocate the multi-level interrupt APIs out of `irq.h` into
a new file named `irq_multilevel.h` to provide cleaner
separation between typical IRQs & multilevel ones.
Added preprocessor versions of `irq_to_level_x` as `IRQ_TO_Lx`.
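A hedged illustration of the two forms (the IRQ number 5 is arbitrary):

    #include <zephyr/irq_multilevel.h>

    /* preprocessor variant, usable in constant expressions */
    #define EXAMPLE_L2_IRQ IRQ_TO_L2(5)

    /* existing helper for run-time values */
    unsigned int encode_runtime(unsigned int irq)
    {
        return irq_to_level_2(irq);
    }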
Signed-off-by: Yong Cong Sin <ycsin@meta.com>
This commit re-implements the SPARC V8 ABI "Flush windows" software
trap. The trap is generated by C++ compilers for exceptions and also by
the C standard library function longjmp().
There were two issues with the previous implementation:
1. It did reads and writes via the stack pointer of the trap window,
which is not defined.
2. It executed with traps enabled but without the processor run-time
state set to safely handle traps. In particular, there was no valid
stack for trap processing. Even though the interrupt priority was set to
the highest level, the behavior at other traps was not deterministic,
for example the non-maskable interrupt (15) trap or the bus error trap
for instruction fetch.
This new implementation does not store backup copies of CPU registers to
the stack, and it executes with traps disabled.
Fixes #63901
Signed-off-by: Martin Åberg <martin.aberg@gaisler.com>
Add pm cpu ops to call the platform-specific implementations for
bringing up secondary cores.
Signed-off-by: Lingutla Chandrasekhar <quic_lingutla@quicinc.com>
HAS_DTS has become a redundant option. All Zephyr architectures now
select this option, meaning devicetree has become a de-facto
requirement. In fact, if any board does not provide a devicetree
source, the build system uses an empty stub, meaning the devicetree
machinery always runs.
Signed-off-by: Gerard Marull-Paretas <gerard@teslabs.com>
_k_neg_eagain is there for use in assembly, where including
errno.h is not possible. However, the usage in ARM was simply
to assign a value to swap_return_value in a C file, where there is
no need to use _k_neg_eagain as errno.h can be included.
So change that to use -EAGAIN directly. This saves 4 bytes of
rodata in built binaries.
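A sketch of the simplification (illustrative, not the exact ARM source):

    #include <errno.h>
    #include <zephyr/kernel.h>

    static void set_swap_return_value(struct k_thread *thread)
    {
        /* previously: thread->arch.swap_return_value = _k_neg_eagain; */
        thread->arch.swap_return_value = -EAGAIN;
    }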
Signed-off-by: Daniel Leung <daniel.leung@intel.com>
The exclusive load/store instructions don't work well when the MMU and
cache are disabled on some cores, e.g. Cortex-A72. Change it to a voting
lock [1] to select the primary core when multiple cores boot simultaneously.
The voting lock has reasonable but minimal requirements on the memory
system.
[1] https://www.kernel.org/doc/html/next/arch/arm/vlocks.html
Signed-off-by: Jaxson Han <jaxson.han@arm.com>
This provides custom memory range check functions, as
it gets a bit complicated with cached/uncached regions.
These functions are marked as __weak so the SoC or board
can override them if needed.
Signed-off-by: Daniel Leung <daniel.leung@intel.com>
There were a few small nits that were pointed out in the initial PR.
This fixes the LOG_ macro in the Arm ELF implementation and replaces a
few stray mentions of modules in llext.h with extensions.
Signed-off-by: Tom Burdick <thomas.burdick@intel.com>
Introduce an optional hook to be called when the CPU is made idle.
If needed, this hook can be used to prepare data for the upcoming
call to z_arm_on_enter_cpu_idle(). The main difference is that
the z_arm_on_enter_cpu_idle_prepare() hook is called before interrupts
are disabled.
Signed-off-by: Andrzej Kuroś <andrzej.kuros@nordicsemi.no>
Adds the linkable loadable extensions (llext) subsystem, which provides
functionality for reading, parsing, and linking ELF-encoded executable
code into a managed extension of the running ELF base image.
A loader interface, and a default buffer loader implementation,
make the ELF data available to the llext subsystem. A simple management
API provides the ability to load and unload extensions as needed. A shell
interface for extension loading and unloading makes it easy to try out.
Adds initial support for ARMv7 Thumb-built ELFs with very specific
compiler flags.
Signed-off-by: Tom Burdick <thomas.burdick@intel.com>
Co-authored-by: Chen Peng1 <peng1.chen@intel.com>
Co-authored-by: Guennadi Liakhovetski <guennadi.liakhovetski@linux.intel.com>
Armv8-M baseline supports various instructions carrying exclusive-monitor
and acquire-release semantics. By adding this guard we let armv8-m.baseline
fall back to arch-defined or compiler built-in support for atomic
operations.
Signed-off-by: Wilfried Chauveau <wilfried.chauveau@arm.com>
Add an ARCH_SUPPORTS_ROM_START Kconfig symbol to mark architectures
that support ROM_START as an argument to zephyr_linker_sources.
This was added so that features relying on ROM_START can
depend on this Kconfig symbol.
Signed-off-by: Yonatan Schachter <yonatan.schachter@gmail.com>
This adds support for using coredump with the Xtensa DC233C core,
which is used by qemu_xtensa and qemu_xtensa_mmu.
Signed-off-by: Daniel Leung <daniel.leung@intel.com>
The old CONFIG_UART_NS16550_ACCESS_IOPORT was used to
indicate whether to access the NS16550 UART via IO port
before device tree was used to describe hardware. Now that we have
device tree, we can specify whether a particular UART
needs to be accessed via IO port using the io-mapped property.
Therefore, CONFIG_UART_NS16550_ACCESS_IOPORT is no longer
needed (and thus also CONFIG_UART_NS16550_SIMULT_ACCESS).
Remove these two kconfigs and modify the code to use device tree
to figure out how to access the UART hardware.
Signed-off-by: Daniel Leung <daniel.leung@intel.com>
arch_switch() is basically an alias to xtensa_switch(), so
we can mark arch_switch() as ALWAYS_INLINE to avoid another
function call, especially when no optimization is used while
debugging.
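The rough shape of the change (hedged; the real definition lives in the
Xtensa arch headers):

    #include <zephyr/toolchain.h>   /* ALWAYS_INLINE */

    void xtensa_switch(void *switch_to, void **switched_from);

    static ALWAYS_INLINE void arch_switch(void *switch_to, void **switched_from)
    {
        xtensa_switch(switch_to, switched_from);
    }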
Signed-off-by: Daniel Leung <daniel.leung@intel.com>
Let's make this official: we use the suffix `_MASK` for the define
carrying the GENMASK for the attributes, and the suffix `_GET(x)` for
the actual macro extracting the attributes.
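As a generic illustration of the convention (the attribute name below is
made up, not one taken from the tree):

    #include <zephyr/sys/util.h>   /* GENMASK */

    #define EXAMPLE_ATTR_MASK   GENMASK(5, 4)
    #define EXAMPLE_ATTR_GET(x) (((x) & EXAMPLE_ATTR_MASK) >> 4)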
Signed-off-by: Carlo Caione <ccaione@baylibre.com>
To make the stack guard work well, clean up and refine the MPU code. To
save MPU regions (the number of MPU regions is limited), we choose
to remove the guard region. Compared to adding an individual region to
guard the stack, removing the guard region can save at least 2 regions
per core.
Similarly to userspace, the stack guard will leverage the dynamic
region switching mechanism, which means we need a region switch during
the context switch. The other option is using a dedicated stack guard
region, but this is very limited since the number of MPU regions is
limited.
Signed-off-by: Jaxson Han <jaxson.han@arm.com>
Refactor the stack relevant macros to prepare to introduce the stack
guard. Also add comments about the changes related to stack layout.
Signed-off-by: Jaxson Han <jaxson.han@arm.com>
Add the stack check function z_arm64_stack_corruption_check at
z_arm64_fatal_error to handle the stack overflow triggered by the
hardware region.
Signed-off-by: Jaxson Han <jaxson.han@arm.com>
Introduce the ARM64_STACK_PROTECTION config. This option leverages the
MMU or MPU to cause a system fatal error if the bounds of the current
process stack are overflowed. This is done by preceding all stack areas
with a fixed guard region. The config depends on MPU for now since MMU
stack protection is not ready.
Signed-off-by: Jaxson Han <jaxson.han@arm.com>
Clear thread->arch during arch_new_thread to avoid unexpected
behavior. If the thread struct is allocated from the heap or on the
stack, the data in thread->arch might be dirty.
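A minimal sketch of the fix (placement inside arch_new_thread() is
illustrative):

    #include <string.h>
    #include <zephyr/kernel.h>

    static void clear_arch_state(struct k_thread *thread)
    {
        /* wipe any stale data left over from a previous use of the
         * memory backing this thread object
         */
        memset(&thread->arch, 0, sizeof(thread->arch));
    }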
Signed-off-by: Jaxson Han <jaxson.han@arm.com>
Accessing memory before the MMU or MPU is initialized will cause a cache
coherence issue. To avoid such a problem, move the safe exception stack
init function to after the MMU or MPU is initialized.
Also change the data section attribute from INNER_SHAREABLE to
OUTER_SHAREABLE; otherwise there will be a cache coherence issue during
the memory region switch. Because we use the background region to do
the region switch, and the default background region is
OUTER_SHAREABLE, if we use INNER_SHAREABLE as the foreground region
then we have to flush all cache regions to make sure the cached values
are right. However, flushing all regions is too heavy, so we set
OUTER_SHAREABLE to fix this issue.
Signed-off-by: Jaxson Han <jaxson.han@arm.com>
Dom0less is a Xen mode without a privileged domain. All guests are created
according to the hypervisor device tree configuration on boot. Thus, there
is no Dom0 with a console daemon, which usually manages console output
from domains.
Zephyr OS contains two serial drivers related to the Xen hypervisor: a
regular one using the console shared page, and a consoleio-based one. The
first one is for setups with a console daemon and was usually used for
Zephyr DomU guests. The second one was previously used only for Zephyr
Dom0 and had corresponding Kconfig options. But consoleio is also used as
the interface for DomU output on Dom0less setups and should be
configurable without the XEN_DOM0 option.
Add a corresponding XEN_DOM0LESS config to the Xen Kconfig files and
proper dependencies in the serial drivers.
Signed-off-by: Dmytro Firsov <dmytro_firsov@epam.com>
Co-authored-by: Oleksandr Tyshchenko <oleksandr_tyshchenko@epam.com>
This is the final step in making the `zephyr,memory-attr` property
actually useful.
The problem with the current implementation is that `zephyr,memory-attr`
is an enum type, which makes it very difficult to use it to actually
describe the memory capabilities. The solution proposed in this PR is to
use the `zephyr,memory-attr` property as an OR-ed bitmask of memory
attributes.
With the change proposed in this PR it is possible in the DeviceTree to
mark the memory regions with a bitmask of attributes by using the
`zephyr,memory-attr` property. This property and the related memory
region can then be retrieved at run-time by leveraging a provided helper
library or the usual DT helpers.
The set of general attributes that can be specified in the property are
defined and explained in
`include/zephyr/dt-bindings/memory-attr/memory-attr.h` (the list can be
extended when needed).
For example, to mark a memory region in the DeviceTree as volatile,
non-cacheable, out-of-order:
    mem: memory@10000000 {
        compatible = "mmio-sram";
        reg = <0x10000000 0x1000>;
        zephyr,memory-attr = <( DT_MEM_VOLATILE |
                                DT_MEM_NON_CACHEABLE |
                                DT_MEM_OOO )>;
    };
The `zephyr,memory-attr` property can also be used to set
architecture-specific custom attributes that can be interpreted at run
time. This is leveraged, among other things, to create MPU regions out
of DeviceTree defined memory regions on ARM, for example:
    mem: memory@10000000 {
        compatible = "mmio-sram";
        reg = <0x10000000 0x1000>;
        zephyr,memory-region = "NOCACHE_REGION";
        zephyr,memory-attr = <( DT_ARM_MPU(ATTR_MPU_RAM_NOCACHE) )>;
    };
See `include/zephyr/dt-bindings/memory-attr/memory-attr-mpu.h` to see
how an architecture can define its own special memory attributes (in
this case ARM MPU).
The property can also be used to set custom software-specific
attributes. For example we can think of marking a memory region as
available to be used for memory allocation (not yet implemented):
    mem: memory@10000000 {
        compatible = "mmio-sram";
        reg = <0x10000000 0x1000>;
        zephyr,memory-attr = <( DT_MEM_NON_CACHEABLE |
                                DT_MEM_SW_ALLOCATABLE )>;
    };
Or maybe we can leverage the property to specify some alignment
requirements for the region:
    mem: memory@10000000 {
        compatible = "mmio-sram";
        reg = <0x10000000 0x1000>;
        zephyr,memory-attr = <( DT_MEM_CACHEABLE |
                                DT_MEM_SW_ALIGN(32) )>;
    };
The conventional and recommended way to deal with and manage memory
regions marked with attributes is by using the provided `mem-attr`
helper library, enabled via `CONFIG_MEM_ATTR` (or by using the usual DT
helpers).
When this option is enabled, the list of memory regions and their
attributes is compiled into a user-accessible array, and a set of
functions is made available that can be used to query, probe and act on
regions and attributes; see `include/zephyr/mem_mgmt/mem_attr.h`.
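A hedged usage sketch of the helper library (see mem_attr.h for the
authoritative signatures):

    #include <zephyr/kernel.h>
    #include <zephyr/mem_mgmt/mem_attr.h>

    void print_region_count(void)
    {
        const struct mem_attr_region_t *regions;
        size_t count = mem_attr_get_regions(&regions);

        printk("%zu memory region(s) carry zephyr,memory-attr\n", count);
    }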
Note that the `zephyr,memory-attr` property is only a descriptive
property of the capabilities of the associated memory region; it
does not result in any actual setting being applied to the memory. The
user, code or subsystem willing to use this information to do some work
(for example creating an MPU region out of the property) must use either
the provided `mem-attr` library or the usual DeviceTree helpers to
perform the required work / setting.
Signed-off-by: Carlo Caione <ccaione@baylibre.com>
Add a Xen domctl API implementation for Zephyr as a control domain.
Previously Zephyr OS was used as an unprivileged Xen domain (Domain-U),
but it can also be used as a lightweight Xen control domain (Domain-0).
To implement such functionality additional Xen interfaces are needed.
One of them is Xen domain controls (domctls) - it allows creating,
configuring and managing Xen domains.
Also, take this as an opportunity to update the copyright and license
identifiers in the touched files.
Signed-off-by: Dmytro Firsov <dmytro_firsov@epam.com>
Xen-related Kconfig options were highly dependent on the xenvm BOARD/SOC.
This is not correct because Xen support may be used on any board and SoC.
So, the Kconfig structure was refactored: now CONFIG_XEN is located in
the arch/ directory (same as in the Linux kernel) and can be selected for
any Cortex-A arm64 setup (no other platforms are currently supported).
Also remove the confusion in Domain 0 naming: Domain-0, initial domain,
Dom0, privileged domain, etc. Now all options related to Xen Domain 0
will be controlled by CONFIG_XEN_DOM0.
Signed-off-by: Dmytro Firsov <dmytro_firsov@epam.com>
After 79d0bf39b8 was merged, the inclusion
of <zephyr/acpi/acpi.h> with CONFIG_ACPI=n caused a build failure because
<acpica/source/include/acpi.h> could no longer be included due to the
include path not being injected anymore.
Fix this by guarding the header inclusion when CONFIG_ACPI
is not set.
Fixes#62679.
Signed-off-by: Carles Cufi <carles.cufi@nordicsemi.no>
This commit provides users with a way to disconnect dynamic
interrupts. This is needed because we don't want to keep
piling up ISR/arg pairs until the maximum number of registrable
clients is reached.
This feature is only relevant for shared and dynamic interrupts.
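A hedged usage sketch (the IRQ line, priority and handler below are
placeholders; see irq.h for the exact prototype of the new disconnect
call):

    #include <zephyr/irq.h>
    #include <zephyr/kernel.h>

    #define EXAMPLE_IRQ  25
    #define EXAMPLE_PRIO 2

    static void example_isr(const void *arg)
    {
        ARG_UNUSED(arg);
    }

    void connect_then_disconnect(void)
    {
        irq_connect_dynamic(EXAMPLE_IRQ, EXAMPLE_PRIO, example_isr, NULL, 0);
        /* ... later, release the slot so another client can register */
        irq_disconnect_dynamic(EXAMPLE_IRQ, EXAMPLE_PRIO, example_isr, NULL, 0);
    }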
Signed-off-by: Laurentiu Mihalcea <laurentiu.mihalcea@nxp.com>
This works by overwriting z_isr_install()'s definition
(possible since the symbol is now weak) with our own definition.
Whenever trying to register a new ISR/arg pair, z_isr_install()
will check to see if the interrupt line is already in use. If it's
not, then it will not share the interrupt and will work exactly
like the old z_isr_install(), meaning it will just write the new
ISR/arg pair to _sw_isr_table.
If the interrupt line is already being used by an ISR/arg pair,
then that line will become shared, meaning we'll overwrite the
_sw_isr_table entry with our own (z_shared_isr, z_shared_sw_isr_table[irq])
pair.
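A simplified, self-contained sketch of that decision (all names below
are illustrative mocks, not the real Zephyr symbols):

    #include <stddef.h>

    struct isr_entry {
        const void *arg;
        void (*isr)(const void *arg);
    };

    void install(struct isr_entry *slot,
                 void (*routine)(const void *), const void *param,
                 const struct isr_entry *shared_dispatcher)
    {
        if (slot->isr == NULL) {
            /* line unused: behave like the old z_isr_install() */
            slot->isr = routine;
            slot->arg = param;
        } else {
            /* line already taken: route it through the shared dispatcher */
            *slot = *shared_dispatcher;
        }
    }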
Signed-off-by: Laurentiu Mihalcea <laurentiu.mihalcea@nxp.com>
Since the shared IRQ code will also use the same logic to compute
the _sw_isr_table index, move the computing logic to a separate
function: z_get_sw_isr_table_idx(), which can be used by other
modules.
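A hedged sketch of the idea (the real helper also handles multi-level
IRQ encodings):

    /* translate a Zephyr IRQ number into its _sw_isr_table slot */
    unsigned int example_sw_isr_table_idx(unsigned int irq)
    {
        return irq - CONFIG_GEN_IRQ_START_VECTOR;
    }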
Signed-off-by: Laurentiu Mihalcea <laurentiu.mihalcea@nxp.com>
This commit introduces all the necessary changes for
enabling the usage of shared interrupts.
This works by using a second interrupt table, _shared_sw_isr_table,
which keeps track of all of the ISR/arg pairs sharing the same
interrupt line. Whenever a second ISR/arg pair is registered
on the same interrupt line using IRQ_CONNECT(), the entry in
_sw_isr_table will be overwritten by a
(shared_isr, _shared_sw_isr_table[irq]) pair. In turn, shared_isr()
will invoke all of the ISR/arg pairs registered on the same
interrupt line.
This feature only works statically, meaning you can only make use
of shared interrupts using IRQ_CONNECT(). Attempting to dynamically
register an ISR/arg pair will overwrite the hijacked _sw_isr_table
entry.
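A hedged usage sketch (IRQ line and priority are placeholders; this
relies on the shared-interrupt support introduced here):

    #include <zephyr/irq.h>
    #include <zephyr/kernel.h>

    static void isr_a(const void *arg) { ARG_UNUSED(arg); }
    static void isr_b(const void *arg) { ARG_UNUSED(arg); }

    void register_both(void)
    {
        /* first registration goes straight into _sw_isr_table */
        IRQ_CONNECT(24, 1, isr_a, NULL, 0);
        /* second one turns the entry into a shared dispatcher */
        IRQ_CONNECT(24, 1, isr_b, NULL, 0);
    }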
Signed-off-by: Laurentiu Mihalcea <laurentiu.mihalcea@nxp.com>
LLVM LLD fills empty spaces (created using ALIGN() or by moving the
location counter) in executable segments with a TrapInstr pattern,
e.g. for ARM the TrapInstr pattern is 0xd4d4d4d4. GNU LD fills
empty spaces with a 0x00 pattern.
We may want to have some sections (e.g. rom_start) filled with 0x00,
e.g. because the MCU can interpret the pattern as configuration data.
Signed-off-by: Patryk Duda <pdk@semihalf.com>
This commit separates kernel_arch_func.h into two header files,
'cortex_a_r/kernel_arch_func.h' and 'cortex_m/kernel_arch_func.h'; it
also removes some functions which are empty.
Signed-off-by: Huifeng Zhang <Huifeng.Zhang@arm.com>
This commit separates cpu_idle.S into two asm files,
'cortex_a_r/cpu_idle.S' and 'cortex_m/cpu_idle.S'.
Signed-off-by: Huifeng Zhang <Huifeng.Zhang@arm.com>
This commit separates thread.c into two source files,
'cortex_a_r/thread.c' and 'cortex_m/thread.c'; it also introduces some
changes.
1. Migrate 'thread.c' to 'cortex_m/thread.c'.
2. Migrate 'thread.c' to 'cortex_a_r/thread.c'.
3. Remove the 'z_arm_mpu_stack_guard_and_fpu_adjust' function as this is
obviously written for the Cortex-M architecture.
4. Remove the 'z_arm_prepare_switch_to_main' function as this is only
used by the Cortex-M architecture.
Signed-off-by: Huifeng Zhang <Huifeng.Zhang@arm.com>
This commit separates 'prep_c.c' into two files based on the architecture:
one is 'cortex_m/prep_c.c', the other is 'cortex_a_r/prep_c.c'.
Signed-off-by: Huifeng Zhang <Huifeng.Zhang@arm.com>
There are too many differences between Cortex-A/R and Cortex-M in the IRQ
code, e.g. Cortex-A/R uses the GIC and Cortex-M uses the NVIC. To reduce
the complexity and make the code easier to maintain, this commit separates
irq_manage.c and isr_wrapper.S into two different parts based on the
architecture.
This commit also removes the part related to the option
'CONFIG_ARM_SECURE_FIRMWARE' from 'cortex_a_r/irq_manage.c' because
this code is written for the Cortex-M architecture.
Signed-off-by: Huifeng Zhang <Huifeng.Zhang@arm.com>
There are too many differences between Cortex-A/R and Cortex-M in the swap
code. To reduce the complexity and make the code easier to maintain, this
commit introduces the following major changes:
1. Separate swap.c and swap_helper.S into two different parts based on
the architecture.
2. Rename 'z_arm_pendsv' to 'z_arm_do_swap' for Cortex-A/R.
3. Remove the part related to the option 'CONFIG_BUILTIN_STACK_GUARD'
from 'cortex_a_r/swap_helper.S' because this code is written for
the Cortex-M architecture.
Signed-off-by: Huifeng Zhang <Huifeng.Zhang@arm.com>
'irq_relay.S' is only used by the Arm Cortex-M architecture, so it's better
to place it in the 'arch/arm/core/cortex_m' directory.
Signed-off-by: Huifeng Zhang <Huifeng.Zhang@arm.com>
This commit follows the parent commit's work.
This commit introduces the following major changes.
1. Move all directories and files in 'include/zephyr/arch/arm/aarch32'
to the 'include/zephyr/arch/arm' directory.
2. Change the path strings affected by change 1.
Signed-off-by: Huifeng Zhang <Huifeng.Zhang@arm.com>
It doesn't make sense to keep the aarch32 directory in the
'arch/arm/core' directory as aarch64 has been moved out.
This commit introduces the following major changes.
1. Move all directories and files in 'arch/arm/core/aarch32' to
'arch/arm/core' and remove the 'arch/arm/core/aarch32' directory.
2. Move all directories and files in 'arch/include/aarch32' to
'arch/include' and remove the 'arch/include/aarch32' directory.
3. Remove the nested includes in the 'arch/include/kernel_arch_func.h'
and 'arch/include/offsets_short_arch.h' header files.
4. Change the path strings affected by changes 1 and 2.
Signed-off-by: Huifeng Zhang <Huifeng.Zhang@arm.com>
The prep_c.c source doesn't depend on the ACPI library. As ACPI support
now requires the ACPICA module, the extra header breaks projects using
x86 without ACPI support.
Signed-off-by: Keith Short <keithshort@google.com>
The old ACPI implementation is replaced with the ACPICA interface,
and the x86 arch port is updated to use the new interface.
Signed-off-by: Najumon B.A <najumon.ba@intel.com>
The RISC-V spec requires the minimum alignment of trap handling code to
depend on the MTVEC.BASE field size. The minimum alignment for RISC-V
platforms is 4 bytes, but the maximum is platform- or application-specific.
Currently there is no common approach to aligning the trap handling
code for RISC-V, and some platforms use custom wrappers to align
_isr_wrapper properly.
This change introduces a generic solution: the
CONFIG_RISCV_TRAP_HANDLER_ALIGNMENT configuration option, which sets
the alignment of the RISC-V trap handling code.
The existing custom solutions for some platforms remain operational,
since the default alignment is set to the minimum possible (4 bytes)
and will be overridden by the potentially larger alignment of the custom
solutions.
Signed-off-by: Alexander Razinkov <alexander.razinkov@syntacore.com>
Add a property to the native_simulator target, to collect
the libraries we want to link with the runner, instead of
abusing the link options to pass them.
Signed-off-by: Alberto Escolar Piedras <alberto.escolar.piedras@nordicsemi.no>
As per #26393, the Local APIC uses a Kconfig-based option for
the base address. This patch adds DTS binding support in the driver,
just like its counterpart, the I/O APIC.
Signed-off-by: Umar Nisar <umar.nisar@intel.com>
Disable the d-cache on the fvp_baser_aemv8r_aarch32 platform because there
are currently some issues when the d-cache is enabled.
Signed-off-by: Huifeng Zhang <Huifeng.Zhang@arm.com>
This header is private and included only in architecture code; there is no
need for it to be at the top of the public include directory.
Note: This might move to a more private location later. For now just
cleaning up the obvious issues.
Signed-off-by: Anas Nashif <anas.nashif@intel.com>
During Zephyr boot with SMP enabled,
while z_smp_init is not yet completed and only the boot core is running,
incorrect code in the broadcast_ipi API will cause the following:
1. An IPI is generated even if other cores are not booted.
2. Incorrect setting of the sgi1r register.
3. All the affinity (1/2/3) values will be incorrect.
Signed-off-by: Chirag Kochar <chirag.kochar@intel.com>