Add some minor inline clarifications regarding the Lazy Stacking
feature in the Cortex-M PendSV handler, for ease of understanding.
Also fix some minor style issues in comments.
Signed-off-by: Ioannis Glaropoulos <Ioannis.Glaropoulos@nordicsemi.no>
Some of these registers may contain nuggets of information that would be
beneficial when debugging, so include them in the fault dump.
Signed-off-by: Bradley Bolen <bbolen@lexmark.com>
Do not hardcode the array size in the loop for printing out the floating
point registers of the exception stack frame. The size of this array
will change when Cortex-R support is added.
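The general pattern is to take the loop bound from the array itself,
for example with Zephyr's ARRAY_SIZE() helper. A generic sketch (the
struct and member names here are illustrative, not the exact ESF
layout):

    #include <stddef.h>
    #include <stdint.h>
    #include <sys/printk.h>
    #include <sys/util.h>      /* ARRAY_SIZE() */

    struct fp_frame {
        uint32_t s[16];        /* grows on targets with more FP registers */
    };

    void dump_fp_frame(const struct fp_frame *f)
    {
        /* Bound is derived from the array itself, so adding Cortex-R
         * support only changes the struct, not every loop over it.
         */
        for (size_t i = 0; i < ARRAY_SIZE(f->s); i++) {
            printk("s[%u]: 0x%08x\n", (unsigned int)i, f->s[i]);
        }
    }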
Signed-off-by: Bradley Bolen <bbolen@lexmark.com>
When CONFIG_MULTITHREADING=n, the kernel-specific PendSV handler is not
used. Remove it from the vector table.
Signed-off-by: Krzysztof Chruscinski <krzysztof.chruscinski@nordicsemi.no>
The GIC can return 0x3ff to indicate a spurious interrupt. Other
interrupt controllers could return something different. Check that the
pending interrupt is valid in order to avoid indexing past the end of
the isr_table.
This fixes #30465 and is based on the aarch64 fix in 9dd2731d.
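The guard amounts to validating the IRQ number returned by the
controller before using it as an index into the table, roughly along
these lines (a sketch; the dispatch code in the tree is structured
differently):

    #include <stdint.h>
    #include <sw_isr_table.h>   /* _sw_isr_table[], IRQ_TABLE_SIZE */

    /* Only index the ISR table when the pending IRQ reported by the
     * interrupt controller is a real one (0x3ff is the GIC "spurious"
     * ID; other controllers use different values).
     */
    void dispatch_irq(uint32_t irq)
    {
        if (irq < IRQ_TABLE_SIZE) {
            const struct _isr_table_entry *ite = &_sw_isr_table[irq];

            ite->isr(ite->arg);
        } /* else: spurious interrupt, nothing to dispatch */
    }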
Signed-off-by: Bradley Bolen <bbolen@lexmark.com>
The flag was present only when ZLI was enabled. That resulted in
additional ifdefs being needed whenever code supports both ZLI and
non-ZLI modes.
Removed the ifdefs and added a build assert to IRQ connections so that
compilation fails if IRQ_ZERO_LATENCY is set but ZLI is disabled (see
the sketch below). Additional cleanup resulted from removing the ifdef.
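The compile-time guard works along these lines (a sketch of the
pattern; header paths and the message text are assumed rather than
copied from the diff):

    #include <toolchain.h>   /* BUILD_ASSERT() */
    #include <sys/util.h>    /* IS_ENABLED() */
    #include <irq.h>         /* IRQ_ZERO_LATENCY */

    /* Example flags for some IRQ connection; compilation fails here if
     * the ZLI flag is requested while CONFIG_ZERO_LATENCY_IRQS is not
     * enabled.
     */
    #define MY_IRQ_FLAGS IRQ_ZERO_LATENCY

    BUILD_ASSERT(!(MY_IRQ_FLAGS & IRQ_ZERO_LATENCY) ||
                 IS_ENABLED(CONFIG_ZERO_LATENCY_IRQS),
                 "IRQ_ZERO_LATENCY used but ZLI support is disabled");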
Signed-off-by: Krzysztof Chruscinski <krzysztof.chruscinski@nordicsemi.no>
Both operands of an operator in which the usual arithmetic
conversions are performed shall have the same essential
type category.
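A hypothetical example of the class of fix this rule implies (not taken
from the actual diff): make both operands of the comparison share the
same essential type category.

    #include <stdint.h>

    void fill(uint8_t *buf, uint32_t len)
    {
        /* was: int i (signed index compared against unsigned length) */
        for (uint32_t i = 0U; i < len; i++) {
            buf[i] = 0U;
        }
    }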
Signed-off-by: Flavio Ceolin <flavio.ceolin@intel.com>
Add a note in the Kconfig help text that explains why Hard ABI
is not possible on builds with TF-M.
Signed-off-by: Ioannis Glaropoulos <Ioannis.Glaropoulos@nordicsemi.no>
When building with TFM, the app is linked against libraries built by
the TFM build system. TFM is always built with -msoft-float, which is
equivalent to -mfloat-abi=soft. FP_HARDABI adds -mfloat-abi=hard, which
gives errors when linking against the libs from TFM since they are
built with a different ABI.
Fixes https://github.com/zephyrproject-rtos/zephyr/issues/33956
Signed-off-by: Øyvind Rønningstad <oyvind.ronningstad@nordicsemi.no>
Split ARM and ARM64 architectures.
Details:
- CONFIG_ARM64 is decoupled from CONFIG_ARM (not a subset anymore)
- AArch64 arch and include files are in dedicated directories
(arch/arm64 and include/arch/arm64)
- AArch64 SoCs and boards are moved to soc/arm64 and boards/arm64
- AArch64-specific DTS files are moved to dts/arm64
- The A72 support for the bcm_vk/viper board is moved to the
boards/bcm_vk/viper directory
Signed-off-by: Carlo Caione <ccaione@baylibre.com>
This patch introduces a new API to enable the FPU for a thread, as the
counterpart of the existing k_float_disable() API. It also adds an
empty arch_float_enable() to each architecture that has
arch_float_disable(). The arc and riscv architectures already implement
arch_float_enable(), so those implementations are left untouched.
Motivation: the current Zephyr implementation does not allow using the
FPU on the main thread or other system threads such as the work queue.
Users need to create a separate thread with K_FP_REGS to run
floating-point code. Being able to enable the FPU on already-running
threads makes it much easier to use.
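A minimal usage sketch of the new API described above, assuming the
same option flag used at thread creation (K_FP_REGS):

    #include <kernel.h>   /* <zephyr/kernel.h> in newer trees */

    void enable_fpu_for_current_thread(void)
    {
        /* Start saving/restoring FP context for this thread; K_FP_REGS
         * is the same option normally passed at thread creation time.
         */
        int ret = k_float_enable(k_current_get(), K_FP_REGS);

        if (ret != 0) {
            printk("k_float_enable() failed: %d\n", ret);
        }
    }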
Signed-off-by: Katsuhiro Suzuki <katsuhiro@katsuster.net>
Add initial support for the Cortex-M55 Core which is an implementation
of the Armv8.1-M mainline architecture and includes support for the
M-profile Vector Extension (MVE).
The support is based on the Cortex-M33 support that already exists in
Zephyr.
Signed-off-by: Kumar Gala <kumar.gala@linaro.org>
GCC 10 introduced out-of-line helper calls for implementing atomic
operations, enabled by default via the '-moutline-atomics' option. This
is breaking several tests because the out-of-line calls try to access
the zephyr_data region from userspace, which is declared as
MT_P_RW_U_NA, triggering a memory fault.
Since there is currently no support for MT_P_RW_U_RO (and probably
never will be), disable the out-of-line helpers by turning off the GCC
option.
Signed-off-by: Carlo Caione <ccaione@baylibre.com>
It is apparently possible for one CPU to change the memory domain
of a thread already being executed on another CPU.
All CPUs must ensure they're using the appropriate mapping after a
thread is newly added to a domain.
Signed-off-by: Nicolas Pitre <npitre@baylibre.com>
Introduce the necessary routines to have the user thread stack correctly
mapped and the functions to swap page tables on context switch.
Signed-off-by: Carlo Caione <ccaione@baylibre.com>
The only user of arch_mem_domain_destroy was the deprecated
k_mem_domain_destroy function which has now been removed. So remove
arch_mem_domain_destroy as well.
Signed-off-by: Kumar Gala <kumar.gala@linaro.org>
There's no need to duplicate the linker section for each architecture.
Instead, move the section declaration to common-rom.ld.
Signed-off-by: Carles Cufi <carles.cufi@nordicsemi.no>
Pretty crude for now, as we always invalidate the entire set.
It remains to be seen if more fine-grained TLB flushing is worth
the added complexity given this ought to be a relatively rare event.
Signed-off-by: Nicolas Pitre <npitre@baylibre.com>
Introduce the basic support code for memory domains. Each domain is
associated with a top page table which is a copy of the global kernel
one. When a partition is added, the corresponding memory range is made
private before its mapping is adjusted.
Signed-off-by: Carlo Caione <ccaione@baylibre.com>
Signed-off-by: Nicolas Pitre <npitre@baylibre.com>
We need to protect against concurrent modifications to page tables and
their use counts.
It would have been nice to have one lock per domain, but we heavily
share page tables across domains. Hence the global lock.
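A minimal sketch of that locking pattern using Zephyr's spinlock API
(the lock name and the guarded helper are illustrative):

    #include <stddef.h>
    #include <stdint.h>
    #include <toolchain.h>
    #include <spinlock.h>   /* <zephyr/spinlock.h> in newer trees */

    /* One global lock: page tables are shared across domains, so a
     * per-domain lock could not cover concurrent edits to shared
     * tables.
     */
    static struct k_spinlock xlat_tables_lock;

    void set_mapping_locked(uintptr_t virt, size_t size, uint64_t attrs)
    {
        k_spinlock_key_t key = k_spin_lock(&xlat_tables_lock);

        /* ...walk and modify the page tables and their use counts... */
        ARG_UNUSED(virt);
        ARG_UNUSED(size);
        ARG_UNUSED(attrs);

        k_spin_unlock(&xlat_tables_lock, key);
    }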
Signed-off-by: Nicolas Pitre <npitre@baylibre.com>
Two scenarios are possible.
privatize_page_range:
Affected pages are made private if they are not already. This means a whole
new page branch starting from the top may be allocated and content
shared with the reference page tables, except for the private range
where content is duplicated.
globalize_page_range:
That's the reverse operation where pages for the given range are shared
with the reference page tables, and pages that are no longer needed are
freed.
When changing a domain mapping the range needs to be privatized first.
When changing a global mapping the range needs to be globalized last.
This way page table sharing across domains is maximized and memory
usage remains optimal.
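An illustrative sketch of these ordering rules; the two helper names
come from the text above, while the signatures, the ptables type and
apply_mapping() are assumptions rather than the exact code:

    #include <stddef.h>
    #include <stdint.h>

    struct ptables;                         /* per-domain top-level table   */
    extern struct ptables kernel_ptables;   /* global reference page tables */

    void privatize_page_range(struct ptables *pt, uintptr_t virt, size_t size);
    void globalize_page_range(struct ptables *pt, uintptr_t virt, size_t size);
    void apply_mapping(struct ptables *pt, uintptr_t virt, size_t size, int attrs);

    /* Domain mapping change: privatize first, so the edit stays out of
     * the shared reference tables.
     */
    void domain_map_change(struct ptables *pt, uintptr_t virt, size_t size, int attrs)
    {
        privatize_page_range(pt, virt, size);
        apply_mapping(pt, virt, size, attrs);
    }

    /* Global mapping change: apply to the reference tables, then
     * globalize last so every domain shares the updated pages and
     * now-unneeded private copies are freed.
     */
    void global_map_change(uintptr_t virt, size_t size, int attrs)
    {
        apply_mapping(&kernel_ptables, virt, size, attrs);
        globalize_page_range(&kernel_ptables, virt, size);
    }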
Signed-off-by: Nicolas Pitre <npitre@baylibre.com>
Make the allocation, population and linking of a new table into
a function of its own for easier code reuse.
Signed-off-by: Nicolas Pitre <npitre@baylibre.com>
CNTFRQ_EL0 can only be written at the highest implemented Exception level.
For example, if EL3 is the highest implemented Exception level,
CNTFRQ_EL0 can only be written at EL3.
Also move z_arm64_el_highest_plat_init to be called only when is_el_highest
is true.
Signed-off-by: Peng Fan <peng.fan@nxp.com>
This patch adds the code managing the syscalls. The privileged stack
is set up before jumping into the real syscall.
Signed-off-by: Carlo Caione <ccaione@baylibre.com>
This leverages the AT (address translation) instruction to test for
a given access permission. The result is then provided in the PAR_EL1
register.
Thanks to @jharris-intel for the suggestion.
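A minimal sketch of the technique, using the S1E0R (stage 1, EL0, read)
variant and the Fault bit of PAR_EL1; the helper name and exact checks
in the tree may differ:

    #include <stdbool.h>
    #include <stdint.h>

    /* Probe whether a stage-1 EL0 read of 'addr' would be allowed by
     * issuing AT S1E0R and then checking the Fault bit PAR_EL1[0].
     */
    bool el0_read_permitted(uintptr_t addr)
    {
        uint64_t par;

        __asm__ volatile ("at s1e0r, %0" : : "r" (addr) : "memory");
        __asm__ volatile ("isb; mrs %0, par_el1" : "=r" (par));

        return (par & 1U) == 0U;   /* F == 0: translation succeeded */
    }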
Signed-off-by: Nicolas Pitre <npitre@baylibre.com>
Introduce the arch_user_string_nlen() assembly routine and the necessary
C code bits.
Signed-off-by: Carlo Caione <ccaione@baylibre.com>
Signed-off-by: Nicolas Pitre <npitre@baylibre.com>
User mode is only allowed to induce oopses and stack check failures via
software-triggered system fatal exceptions.
Signed-off-by: Carlo Caione <ccaione@baylibre.com>
The arch_is_user_context() function relies on the content of the
tpidrro_el0 register to determine whether we are in user context or not.
This register is set to '1' when in EL1 and set back to '0' when user
threads are running in userspace.
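A sketch that follows the description above (the bit polarity is taken
from this text; the actual helper and register encoding in the tree may
differ):

    #include <stdbool.h>
    #include <stdint.h>

    /* Per the description above, tpidrro_el0 reads as 1 while executing
     * in EL1 and as 0 while a user thread runs in EL0, so a zero value
     * means user context.
     */
    static inline bool is_user_context_sketch(void)
    {
        uint64_t tpidrro_el0;

        __asm__ volatile ("mrs %0, tpidrro_el0" : "=r" (tpidrro_el0));

        return (tpidrro_el0 & 1U) == 0U;
    }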
Signed-off-by: Carlo Caione <ccaione@baylibre.com>
Introduce the first pieces needed to schedule user threads by defining
two different code paths for kernel and user threads.
Signed-off-by: Carlo Caione <ccaione@baylibre.com>
If EL2 is implemented but we're skipping EL2, we should still
do EL2 init. Otherwise we end up with a bunch of things still
at their (unknown) reset values.
This in particular causes problems when different
cores have different virtual timer offsets.
Signed-off-by: James Harris <james.harris@intel.com>
There are several issues with the current implementation of the
{inc,dec}_nest_counter macros.
The first problem is that it's internally using a call to a misplaced
function called z_arm64_curr_cpu() (for some unknown reason hosted in
irq_manage.c) that could potentially clobber the caller-saved registers
without any notice to the user of the macro.
The second problem is that, because these are macros, the clobbered
registers should be specified at the calling site, which is not possible
with the current implementation.
To fix these issues and make the call quicker, this patch rewrites the
code in assembly leveraging the availability of the _curr_cpu array. It
now clobbers only two registers passed from the calling site.
Signed-off-by: Carlo Caione <ccaione@baylibre.com>
Null-pointer exception detection using DWT is currently incompatible
with the openocd runner's default implementation, which leaves debug
mode on by default.
As a consequence, on all targets that use the openocd runner,
null-pointer exception detection using DWT generates an assert, and all
tests fail on such platforms.
Disable this until the openocd behavior is fixed (#32984) and enable
the MPU-based solution for now.
Signed-off-by: Erwan Gouriou <erwan.gouriou@linaro.org>
With _kernel_offset_to_nested, we are only able to access the nested
counter of the first CPU. Since we are going to support SMP, we need to
access the nested counter of each CPU.
To get the current CPU, introduce z_arm64_curr_cpu for use from
assembly, because arch_curr_cpu() cannot be used from asm code.
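A plausible shape for such a wrapper, assuming arch_curr_cpu() is the
usual static inline accessor; not necessarily the exact code from the
patch:

    #include <kernel.h>
    #include <kernel_structs.h>

    /* arch_curr_cpu() is a static inline helper, so it cannot be
     * referenced from assembly; expose a plain out-of-line wrapper for
     * asm callers.
     */
    struct _cpu *z_arm64_curr_cpu(void)
    {
        return arch_curr_cpu();
    }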
Signed-off-by: Peng Fan <peng.fan@nxp.com>