Ensure callee-saved registers are included in the coredump.
Push the callee-saved registers onto the stack and pass them as a
parameter to z_do_kernel_oops for CONFIG_ARMV7_M_ARMV8_M_MAINLINE
when CONFIG_EXTRA_EXCEPTION_INFO is enabled.
Signed-off-by: Mark Holden <mholden@fb.com>
Debugger plugins use the `z_sys_post_kernel` variable to detect whether
the kernel is currently running, and hence whether any threads exist. As
this is just a standard variable however, after a reset the initial
value of this variable is whatever it was before reset (true) until the
bss section is zeroed halfway through `z_arm_prep_c`. Debuggers are
therefore unable to differentiate between a normally running application
and the very first stages of the boot process.
Clearing this variable as the first action upon reset allows debuggers
to display the correct thread state after the first 3 instructions have
run.
Signed-off-by: Jordan Yates <jordan.yates@data61.csiro.au>
Assembler files were not migrated with the new <zephyr/...> prefix.
Note that the conversion has been scripted, refer to #45388 for more
details.
Signed-off-by: Gerard Marull-Paretas <gerard.marull@nordicsemi.no>
In order to bring consistency in-tree, migrate all arch code to the new
prefix <zephyr/...>. Note that the conversion has been scripted, refer
to zephyrproject-rtos#45388 for more details.
Signed-off-by: Gerard Marull-Paretas <gerard.marull@nordicsemi.no>
This adds lazy floating point context switching. On svc/irq entrance,
the VFP is disabled and a pointer to the exception stack frame is saved
away. If the esf pointer is still valid on exception exit, then no
other context used the VFP so the context is still valid and nothing
needs to be restored. If the esf pointer is NULL on exception exit,
then some other context used the VFP and the floating point context is
restored from the esf.
The undefined instruction handler is responsible for saving away the
floating point context if needed. If the handler is in the first
irq/svc context and the current thread uses the VFP, then the float
context needs to be saved. Also, if the handler is in a nested context
and the previous context was using the VFP, save the float context.
Signed-off-by: Bradley Bolen <bbolen@lexmark.com>
This commit updates the Cortex-R reset routine to initialise
(synchronise) the VFP D16-D31 registers when Dual-redundant Core
Lock-step (DCLS) is enabled.
Signed-off-by: Stephanos Ioannidis <root@stephanos.io>
Grouping the FPU registers together will make adding FPU support for
Cortex-A/R easier later. It also makes it easier to take the sizeof and
offsetof of the FPU registers.
Signed-off-by: Bradley Bolen <bbolen@lexmark.com>
Cortex-A/R use a descending stack frame and the hardware does not help
with the stacking. This led to some less than desirable workarounds in
the exception code where the basic stack frame was saved twice.
Rearranging the order of the exception stack frame removes that problem
and provides a clearer path to saving CPU context in a fully descending
manner.
Signed-off-by: Bradley Bolen <bbolen@lexmark.com>
This commit adds the unified floating-point configuration symbols for
the ARM architectures.
These configuration symbols allow specification of the floating-point
coprocessors, such as VFP (also known as FP for Cortex-M) and NEON,
for the ARM architectures.
Signed-off-by: Stephanos Ioannidis <root@stephanos.io>
V7-A also supports TPIDRURO, so go ahead and use that for TLS, enabling
thread local storage for the other ARM architectures.
Add an __aeabi_read_tp function in case code was compiled to use it.
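For illustration, a minimal C sketch of what the helper does (the in-tree
routine is assembly, since the AEABI requires __aeabi_read_tp to preserve
every register except r0):

    void *__aeabi_read_tp(void)
    {
        void *tp;

        /* TPIDRURO: user read-only thread ID register (CP15 c13, c0, 3) */
        __asm__ volatile("mrc p15, 0, %0, c13, c0, 3" : "=r"(tp));
        return tp;
    }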
Signed-off-by: Keith Packard <keithp@keithp.com>
Commit d8f186aa4a ("arch: common: semihost: add semihosting
operations") encapsulated semihosting invocation in a per-arch
semihost_exec() function. There is a fixed register variable declaration
for the return value, but this variable is not listed as an output
operand to the respective inline assembly segments, which is an error.
This is not reported as such by gcc and the generated code is still OK
in those particular instances but this is not guaranteed, and clang
does complain about such cases.
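A hedged sketch of the corrected pattern (Cortex-M flavour, where the
semihosting trap is BKPT 0xAB; names and types in-tree may differ), with
the result register listed as an output operand:

    static inline long semihost_exec_sketch(unsigned long op, void *args)
    {
        register unsigned long r0 __asm__("r0") = op;
        register void *r1 __asm__("r1") = args;
        register long ret __asm__("r0");

        __asm__ volatile("bkpt 0xab"
                 : "=r"(ret)            /* the previously missing output operand */
                 : "r"(r0), "r"(r1)
                 : "memory");
        return ret;
    }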
Signed-off-by: Nicolas Pitre <npitre@baylibre.com>
Add an API that utilizes the ARM semihosting mechanism to interact with
the host system when a device is being emulated or run under a debugger.
RISCV is implemented in terms of the ARM implementation, and therefore
the ARM definitions cross enough architectures to be defined 'common'.
Functionality is exposed as a separate API instead of syscall
implementations (`_lseek`, `_open`, etc.) due to various quirks with
the ARM mechanisms that mean function arguments are not standard.
For more information see:
https://developer.arm.com/documentation/dui0471/m/what-is-semihosting-impl
Signed-off-by: Jordan Yates <jordan.yates@data61.csiro.au>
With GCC 11 now supporting low overhead branching in ARMv8.1, ASM "LE"
(loop-end) instructions would trigger an INVSTATE hard-fault after
FPSCR was set to 0. This was due to the FPSCR getting a new field in
ARMv8.1. LTPSIZE is now set to its reset value, i.e. tail predication not
applied.
Signed-off-by: Ryan McClelland <ryanmcclelland@fb.com>
The cache is an optional configuration on both the ARM Cortex-M7 and
Cortex-M55. Previously, the code only checked that the CPU was an M7,
rather than checking whether the CPU was actually built with the cache.
Signed-off-by: Ryan McClelland <ryanmcclelland@fb.com>
This commit changes the CODE_DATA_RELOCATION dependency by
adding CPU_AARCH32_CORTEX_R next to CPU_CORTEX_M.
Signed-off-by: Mateusz Sierszulski <msierszulski@antmicro.com>
Cortex-M code is the only flavor that supports switching between secure
and non-secure state so make sure this kconfig only applies to it.
Signed-off-by: Bradley Bolen <bbolen@lexmark.com>
Commit a2cfb8431d ("arch: arm: Add code for swapping threads between
secure and non-secure") changed the mode variable in the _thread_arch to
be defined by ARM_STORE_EXC_RETURN or USERSPACE. The generated offset
define for mode was enabled by FPU_SHARING or USERSPACE. This broke
Cortex-R with FPU, but with ARM_STORE_EXC_RETURN disabled. Reconcile
the checks.
Signed-off-by: Bradley Bolen <bbolen@lexmark.com>
This is a strange one: the printing code pushes a floating point
register, and is called during the MPU fault. If the floating point
registers are lazily stacked, this FP push can cause another MPU
fault to become pending during the current MPU fault, and be tail-chained
without returning to PendSV. Since we're already cleaning up the
FP exception reason, we might as well also clean up this pending,
spurious MPU exception.
Signed-off-by: Jimmy Brisson <jimmy.brisson@linaro.org>
If an SVC was pending during the stack overflow, it will run
after the return of the memory management fault. Unfortunately for
the SVC handler, its invariant, that PSP points to the
hardware-stacked context, is no longer valid. When the user has a
k_sys_fatal_error_handler that tries to kill the thread that caused a
stack overflow, this manifests as the SVC reading the memory of whatever
is on the stack after it was adjusted by the mem manage fault handler,
which leads to unending, spurious hard faults, locking up the system.
This patch prevents that.
Signed-off-by: Jimmy Brisson <jimmy.brisson@linaro.org>
The incorrect sequence causes the thread to not be aborted in the
ISR context. The following test case failed:
tests/kernel/fatal/exception/kernel.common.stack_sentinel.
The stack sentinel detects the stack overflow as normal during a timer
ISR exit. Note that, currently, the stack overflow detection happens
after the context switch check, and the detection then calls SVC to
raise a fatal error, which increments the nested counter (+1). At
this point, a context switch is needed to finally abort the thread.
However, after the fatal error handling, the program cannot do a context
switch either during the SVC exit [1] or during the timer ISR exit [2].
[1] is because the SVC context is in an interrupt nested state (the
nested counter is 2).
[2] is because the current point (after the SVC context is popped) is
already past the switch check.
Signed-off-by: Jaxson Han <jaxson.han@arm.com>
ARMv8-R allows setting the vector table address using the VBAR
register, so there is no need to relocate it.
Move the vector_table setting out of reset.S and into the
relocate vector table function, as is done for Cortex-M
CPUs.
Signed-off-by: Julien Massot <julien.massot@iot.bzh>
The ARMv8-R processors always boot into Hyp mode (EL2).
To enter EL1:
Program the HACTLR register, because it defaults
to only allowing EL2 accesses. HACTLR controls
whether EL1 can access memory region registers and CPUACTLR.
Program the SPSR before entering EL1.
Other registers default to allowing accesses at EL1 from reset.
Set VBAR to the correct location for the vector table.
Set ELR to point to the entry point of the EL1 code and call ERET.
Signed-off-by: Julien Massot <julien.massot@iot.bzh>
According to Kconfig guidelines, boolean prompts must not start with
"Enable...". The following command has been used to automate the changes
in this patch:
sed -i "s/bool \"[Ee]nables\? \(\w\)/bool \"\U\1/g" **/Kconfig*
Signed-off-by: Gerard Marull-Paretas <gerard.marull@nordicsemi.no>
Change the CPU_CORTEX_R kconfig option to CPU_AARCH32_CORTEX_R to
distinguish the armv7 version from the armv8 version of Cortex-R.
Signed-off-by: Bradley Bolen <bbolen@lexmark.com>
This was introduced when trying to fix a previous merge conflict. It
broke userspace tests on nucleo_l073rz.
Fixes #42627
Signed-off-by: Bradley Bolen <bbolen@lexmark.com>
These functions help the code to be more self-documenting. Use them to
make the code's intent clearer.
Signed-off-by: Bradley Bolen <bbolen@lexmark.com>
Replace CONFIG_CPU_CORTEX_R with CONFIG_ARMV7_R since it is clearer with
respect to the difference between v7 and v8 Cortex-R.
Signed-off-by: Bradley Bolen <bbolen@lexmark.com>
When calling a syscall, the SVC routine will now elevate the thread to
privileged mode and exit the SVC setting the return address to the
syscall handler. When the thread is swapped back in, it will be running
z_do_arm_syscall in system mode. That function will run the syscall and
then automatically return the thread to user mode.
This allows running the syscall in sys mode on a thread so that we can
use syscalls that sleep without doing unnatural things. The previous
implementation would enable interrupts while still in the SVC call and
do weird things with the nesting count. An interrupt could happen
during this time when the syscall was still in the exception state, but
the nested count had been decremented too soon. Correctness of the
nested count is important for future floating point unit work.
The Cortex-R behavior now matches that of Cortex-M.
Signed-off-by: Bradley Bolen <bbolen@lexmark.com>
For functions returning nothing, there is no need to document
with @return, as Doxygen complains about "documented empty
return type of ...".
Signed-off-by: Daniel Leung <daniel.leung@intel.com>
Fix the assert that checks for existence of a cycle counter.
The field is named NOCYCCNT, so when it is 1, there is no cycle
counter. But we are asserting the opposite.
Signed-off-by: Sebastian Bøe <sebastian.boe@nordicsemi.no>
A Cortex-M specific function (sys_clock_isr()) was defined as a weak
function, so in practice it was always available when the system clock was
enabled, even if no Cortex-M SysTick was available. This patch
introduces an auxiliary Kconfig option that, when selected, installs the
ISR function. External SysTick drivers can also make use of
this function, thus achieving the same functionality offered today but
in a cleaner way.
Signed-off-by: Gerard Marull-Paretas <gerard.marull@nordicsemi.no>
Use sys_clock_hw_cycles_per_sec() instead of
CONFIG_SYS_CLOCK_HW_CYCLES_PER_SEC to determine clock cycles.
Signed-off-by: Michel Haber <michel-haber@hotmail.com>
Modify #ifdefs so that any code that is compiled if CONFIG_ARMV7_R is
set is also compiled if CONFIG_ARMV7_A is set.
Modify #ifdefs so that any code that is compiled if CONFIG_CPU_CORTEX_R
is set is also compiled if CONFIG_CPU_AARCH32_CORTEX_A is set.
Modify source dir inclusion in CMakeLists.txt accordingly.
Brief file descriptions have been updated to include Cortex-A wherever
only Cortex-M and Cortex-R were mentioned so far.
Signed-off-by: Immo Birnbaum <Immo.Birnbaum@weidmueller.com>
An initial implementation for memory management using the ARMv7 MMU.
A single L1 translation table for the whole 4 GB address space is
always present; a configurable number of L2 page tables are linked to
the L1 table based on the static memory area configuration at boot
time, or whenever arch_mem_map/arch_mem_unmap are called at run-time.
Currently, a CPU with the Multiprocessor Extensions and execution at
PL1 are always assumed. Userspace-related features or thread stack
guard pages are not yet supported. Neither are LPAE, PXN or TEX
remapping. All mappings are currently assigned to the same domain.
Regarding the permissions model, access permissions are specified using
the AP[2:1] model rather than the older AP[2:0] model, which, according
to ARM's documentation, is deprecated and should no longer be
used. The newer model adds some complexity when it comes to mapping
pages as inaccessible (the AP[2:1] model doesn't support explicit
specification of "no R, no W" permissions, it's always at least "RO");
this is accomplished by invalidating the ID bits of the respective
page's PTE.
Includes sources, Kconfig integration, adjusted CMakeLists and the
modified linker command file (proper section alignment!).
Signed-off-by: Immo Birnbaum <Immo.Birnbaum@weidmueller.com>
The configuration bits ATCMPCEN, B0TCMPCEN and B1TCMPCEN in the ACTLR
register referenced in the function z_arm_tcm_disable_ecc are only
defined for Cortex-R CPUs. For Cortex-A CPUs, those bits are declared
as reserved.
Comp.: https://arm-software.github.io/CMSIS_5/Core_A/html/group__CMSIS__ACTLR.html
Signed-off-by: Immo Birnbaum <Immo.Birnbaum@weidmueller.com>
There are two macros for declaring stack arrays:
K_KERNEL_STACK_ARRAY_DEFINE:
Defines the array, allocating storage and setting the section name.
K_KERNEL_STACK_ARRAY_EXTERN:
Declares the name of a stack array, allowing code to reference
the array, which must be defined elsewhere.
arch/arm/include/aarch32/cortex_m/stack.h was mis-using
K_KERNEL_STACK_ARRAY_DEFINE to declare z_interrupt_stacks by sticking
'extern' in front of the macro use. However, because this macro also sets
the object file section for the symbol, having two of those caused a
conflict in the compiler due to the automatic unique name mechanism used
for sections to allow unused symbols to be discarded during linking.
This patch makes the header use the correct macro.
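For illustration (array name and parameters taken from the in-tree usage,
but treat them as assumptions), the intended split looks roughly like:

    /* header: declare only, no storage, no section attribute */
    K_KERNEL_STACK_ARRAY_EXTERN(z_interrupt_stacks, CONFIG_MP_NUM_CPUS,
                    CONFIG_ISR_STACK_SIZE);

    /* exactly one C file: define, allocating storage in the stack section */
    K_KERNEL_STACK_ARRAY_DEFINE(z_interrupt_stacks, CONFIG_MP_NUM_CPUS,
                    CONFIG_ISR_STACK_SIZE);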
Signed-off-by: Keith Packard <keithp@keithp.com>
The assert log of z_priv_stacks_ram_start failed to build due to passing
&z_priv_stacks_ram_start instead of just z_priv_stacks_ram_start.
Fixes #39190
Signed-off-by: Kumar Gala <kumar.gala@linaro.org>
This commit adds the half-precision (16-bit) floating-point
configurations to the ARM AArch32 architectures.
Enabling CONFIG_FP16 has the effect of specifying `-mfp16-format`
option (in case of GCC) which allows using the half-precision floating
point types such as `__fp16` and `_Float16`.
Note that this configuration can be used regardless of whether a
hardware FPU is available or supports half-precision operations.
When an FP16-capable FPU is not available, the compiler will
automatically provide the software emulations.
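A small illustrative example of what the option enables, using half
precision purely as a storage format:

    /* __fp16 is a storage format; arithmetic promotes to float, and the
     * compiler emits software conversions if the FPU lacks FP16 support
     */
    static __fp16 samples[64];

    float samples_average(void)
    {
        float sum = 0.0f;

        for (int i = 0; i < 64; i++) {
            sum += samples[i];  /* each element widens to float */
        }
        return sum / 64.0f;
    }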
Signed-off-by: Stephanos Ioannidis <root@stephanos.io>
The ld linker will only resolve undefined symbols inside functions that
are actually called.
However, not all linkers behave this way. Certain linkers, for example
armlink, resolve all undefined symbols even if the function will be
pruned at a later stage of linking.
Therefore, `ifdef CONFIG_GEN_ISR_TABLES` has been placed to safeguard
functions that will call undefined symbols when CONFIG_GEN_ISR_TABLES=y.
Signed-off-by: Torsten Rasmussen <Torsten.Rasmussen@nordicsemi.no>
z_arm_do_syscall is only defined and used when CONFIG_USERSPACE=y.
Defining the symbol z_arm_do_syscall in assembly without a corresponding
implementation is fine for GNU ld as long as the function is not
actively called, but armlink fails to link in such cases.
Safeguard GTEXT(z_arm_do_syscall) so the symbol is only referenced when
actively used, that is, when CONFIG_USERSPACE=y.
Signed-off-by: Torsten Rasmussen <Torsten.Rasmussen@nordicsemi.no>
Cleanup and preparation commit for linker script generator.
Zephyr linker scripts provide start and end symbols for each section,
and sometimes even size and LMA start symbols.
Generally, start and end symbols use the following pattern:
Section name: foo
Section start symbol: __foo_start
Section end symbol: __foo_end
However, this pattern is not followed consistently.
To allow for cleaner linker script generation and to ensure consistent
naming of symbols, the following pattern is introduced consistently:
Section name: foo
Section start symbol: __foo_start
Section end symbol: __foo_end
Section size symbol: __foo_size
Section LMA start symbol: __foo_load_start
This commit aligns the symbols for _ramfunc_ram/rom with the other
symbols so that they follow the consistent pattern, which allows for
linker script and scatter file generation.
The symbols are named according to the section name they describe.
Section name is `ramfunc`
The following symbols are aligned in this commit:
- _ramfunc_ram_start -> __ramfunc_start
- _ramfunc_ram_end -> __ramfunc_end
- _ramfunc_ram_size -> __ramfunc_size
- _ramfunc_rom_start -> __ramfunc_load_start
Signed-off-by: Torsten Rasmussen <Torsten.Rasmussen@nordicsemi.no>
Related to GitHub #22290. Getting an interrupt during MPU buffer
validation corrupts the index register. The fix applied to ARC is to
disable interrupts during the buffer validate operation.
Signed-off-by: Phil Erwin <phil.erwin@lexmark.com>
Cortex-A/R does not have hardware-supported nested interrupts, but
nesting is easily emulated using the nesting level stored in the kernel
structure.
Signed-off-by: Bradley Bolen <bbolen@lexmark.com>
Add functionality based on Cortex-M that enables recovery from a data
abort using zephyr's exception recovery framework. If there is a
registered z_exc_handle for a function, then use its fixup address if
that function aborts.
Signed-off-by: Bradley Bolen <bbolen@lexmark.com>
With the addition of userspace support, Cortex-R needs to use SVC calls
to handle oops exceptions. Add that support by defining ARCH_EXCEPT to
do an SVC call.
Signed-off-by: Bradley Bolen <bbolen@lexmark.com>
The user thread cannot be trusted so do not use the stack pointer it
passes in. Use the thread's privilege stack when in privileged modes to
make sure a user thread does not trick the svc/isr handlers into writing
to memory it should not.
Signed-off-by: Bradley Bolen <bbolen@lexmark.com>
This commit adds the ARMv8.1-M M-Profile Vector Extension (MVE)
configurations as well as the compiler flags to enable it.
The M-Profile Vector Extension consists of the MVE-I and MVE-F
instruction sets which are integer and floating-point vector
instruction sets, respectively.
The MVE-I instruction set is a superset of the ARM DSP instruction
set (ARMv7E-M) and therefore depends on ARMV8_M_DSP, and the MVE-F
instruction set is a superset of the ARM MVE-I instruction set and
therefore depends on ARMV8_1_M_MVEI.
The SoCs that implement the MVE instruction set should select the
following configurations:
select ARMV8_M_DSP
select ARMV8_1_M_MVEI
select ARMV8_1_M_MVEF (if floating-point MVE is supported)
The GCC compiler flags for the MVE instruction set are specified
through the `-mcpu` flag.
In case of the Cortex-M55 (the only supported processor type for
ARMv8.1-M at the time of writing), the `-mcpu=cortex-m55` flag, by
default, enables all the supported extensions which are DSP, MVE-I and
MVE-F.
The extensions that are not supported can be specified by appending
`+no(ext)` to the `-mcpu=cortex-m55` flag:
-mcpu=cortex-m55 Cortex-M55 with DSP + MVE-I + MVE-F
-mcpu=cortex-m55+nomve.fp Cortex-M55 with DSP + MVE-I
-mcpu=cortex-m55+nomve Cortex-M55 with DSP
-mcpu=cortex-m55+nodsp Cortex-M55 without any extensions
Signed-off-by: Stephanos Ioannidis <root@stephanos.io>
The TLS global pointer is only set during context switch.
So for the first switch to the main thread, the TLS pointer
is NULL, which would cause an access violation when trying
to access any thread-local variables in the main thread.
Fix it by setting it before going into the main thread.
Signed-off-by: Daniel Leung <daniel.leung@intel.com>
Cleanup an #ifdef statement in swap_helper.S; use
ARMV6_M_ARMV8_M_BASELINE instead of listing all
Cortex-M baseline implementation variants. This
fixes an issue with Cortex-M23 whose Kconfig
define was not included in the original list.
Signed-off-by: Ioannis Glaropoulos <Ioannis.Glaropoulos@nordicsemi.no>
When inside an escalated HardFault, we would like to get
more information about the reason for this escalation. We
first check if the reason for this escalation is an SVC,
which occurs within a priority level that does not allow
it to trigger (e.g. fault or another SVC). If this is true
we set the error reason according to the provided argument.
Only when this is not a synchronous SVC that caused the HF,
do we check the other reasons for HF escalation (e.g. a BF
inside a previous BF).
We also add a case for a debug event, to complete going through
the available flags in HFSR.
Finally we ASSERT if we cannot find the reason for the escalation.
Signed-off-by: Ioannis Glaropoulos <Ioannis.Glaropoulos@nordicsemi.no>
Move the assessment of a synchronous SVC error into a
separate function. This commit does not introduce any
behavioral changes.
Signed-off-by: Ioannis Glaropoulos <Ioannis.Glaropoulos@nordicsemi.no>
Clean up a few more hard-coded constants
in swap_helper.S and replace them with
CMSIS-like defines in cpu.h. No behavioral
changes in this commit.
Signed-off-by: Ioannis Glaropoulos <Ioannis.Glaropoulos@nordicsemi.no>
When locking interrupts in a critical section, it is
safer to do MSR BASEPRI_MAX instead of BASEPRI. The
rationale is that when writing to BASEPRI_MAX, the
writing is conditional, and is only applied if the
change is to a higher priority level. This commit
replaces BASEPRI with BASEPRI_MAX in operations that
aim to lock some specific interrupts:
- irq_lock()
- masking out PendSV
So, for example, it is not possible to actually
unmask any interrupts by doing an irq_lock operation.
The commit does not introduce behavioral changes.
However, it makes irq_lock() more robust against
future changes to the IRQ locking mechanism.
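As an illustrative sketch of the difference (helper name is hypothetical
and <stdint.h> is assumed; CMSIS offers __set_BASEPRI_MAX() for the same
thing):

    /* A write to BASEPRI can raise or lower the masking level; a write to
     * BASEPRI_MAX is ignored by the hardware unless it raises the level,
     * which is exactly what a lock operation wants.
     */
    static inline void mask_up_to(uint32_t level)
    {
        __asm__ volatile("msr BASEPRI_MAX, %0" : : "r"(level) : "memory");
    }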
Signed-off-by: Ioannis Glaropoulos <Ioannis.Glaropoulos@nordicsemi.no>
Baseline Cortex-M requires VTOR to be aligned on a 64-word
boundary. That is because bit-7 of VTOR is also RAZ/WI.
The commit updates the vector table section alignment for
Baseline Cortex-M to reflect the implementation constraint.
Signed-off-by: Ioannis Glaropoulos <Ioannis.Glaropoulos@nordicsemi.no>
Platform specific initialization during early boot
has been a feature supported only by Cortex-M; the
Kconfig symbol is defined in arch/arm Kconfig space.
We rename the z_platform_init() function to
z_arm_platform_init(), to indicate more clearly that
this is an internal, private ARM-only API.
This commit does not introduce behavioral changes.
Signed-off-by: Ioannis Glaropoulos <Ioannis.Glaropoulos@nordicsemi.no>
If the DebugMonitor extension is implemented by the core,
the interrupt may be pended and become active, even if it
is not enabled. Set the priority level of DebugMonitor upon
system initialization to the intended value unconditionally
so we do not end up in undefined behavior, if the exception
is accidentally pended. Since the priority level is set at
init, we can remove resetting the priority in DWT driver
initialization.
Signed-off-by: Ioannis Glaropoulos <Ioannis.Glaropoulos@nordicsemi.no>
When the SoC implements SysTick, but the system
does not use it as the driver for system timing
we still need to set its interrupt level. This
is because the SysTick IRQ is always enabled,
so we must ensure the interrupt priority is set
to a level lower than the kernel interrupts (for
the assert mechanism to work properly) in case
the SysTick interrupt is accidentally raised.
Signed-off-by: Ioannis Glaropoulos <Ioannis.Glaropoulos@nordicsemi.no>
If the PendSV interrupt is not used by Zephyr (this is
the case when we build with single-thread support) we
route the interrupt to z_arm_exc_spurious, instead of
assigning 0 to the vector table entry. This is because
the interrupt is always enabled and always exists, so
it is safer to always get the proper error report, in
case we accidentally pend the PendSV, for any reason.
We also add a comment in the PendSV priority setting,
explaining why it has to be assigned a priority level
even if it is not used.
Signed-off-by: Ioannis Glaropoulos <Ioannis.Glaropoulos@nordicsemi.no>
Create z_arm_preempted_thread_in_user_mode to abstract the
implementation differences between Cortex-M and R to determine if an
exception came from userspace.
Signed-off-by: Bradley Bolen <bbolen@lexmark.com>
Create z_arm_thread_is_user_mode to abstract the implementation
differences between Cortex-M and R to determine if the current thread is
in user or kernel mode.
Signed-off-by: Bradley Bolen <bbolen@lexmark.com>
Most arch's CMakeLists.txt contain rules to add compiler and linker
flags for coverage if CONFIG_COVERAGE is enabled, but 4 of them were
missing this.
Instead, set the coverage flags in arch/common/CMakeLists.txt which
affects all archs.
Signed-off-by: Jeremy Bettis <jbettis@chromium.org>
The new API can be used any time all FP registers must be manually
saved and restored for an operation.
Also, this eases readability.
Signed-off-by: Øyvind Rønningstad <oyvind.ronningstad@nordicsemi.no>
Most of the code for the three exception functions is identical so use
macros to make things easier to read.
Signed-off-by: Bradley Bolen <bbolen@lexmark.com>
Use the context switch macro for z_arm_cortex_r_svc to be clearer
about the SVC call being executed.
Signed-off-by: Bradley Bolen <bbolen@lexmark.com>
Shrink the name of the hidden cortex-m option for the
null-pointer dereference detection feature.
Signed-off-by: Ioannis Glaropoulos <Ioannis.Glaropoulos@nordicsemi.no>
Reduce the length of the Kconfig defines related to
null-pointer dereference detection in Cortex-M.
Signed-off-by: Ioannis Glaropoulos <Ioannis.Glaropoulos@nordicsemi.no>
If single thread builds are not supported by the
architecture, the MULTITHREADING option should be
prompt-less to block any modifications to it. We
also introduce an explicit ARCH-level Kconfig that
reflects whether the ARCH is capable of single-thread
Zephyr builds.
Signed-off-by: Ioannis Glaropoulos <Ioannis.Glaropoulos@nordicsemi.no>
In case CONFIG_NOCACHE_MEMORY=y, the D-Cache needs to be cleaned and
invalidated before enabling the MPU to make sure no data from a
__nocache__ region is present in the D-Cache.
If the D-Cache is disabled, SCB_CleanInvalidateDCache() shall not be
used, as the cache might contain random data for random addresses, and this
might just create a bus fault.
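A minimal sketch of the intended sequence, using CMSIS names (the MPU
enable call shown is an assumed surrounding step):

    static void mpu_enable_with_clean_dcache(void)
    {
    #if defined(CONFIG_NOCACHE_MEMORY)
        if (SCB->CCR & SCB_CCR_DC_Msk) {
            /* flush any __nocache__ data still sitting in the D-Cache;
             * skipped when the cache is off, since cleaning it then
             * could push garbage lines and bus fault
             */
            SCB_CleanInvalidateDCache();
        }
    #endif
        arm_core_mpu_enable();  /* assumed name for the MPU enable step */
    }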
Signed-off-by: Aurelien Jarno <aurelien@aurel32.net>
On reset we do not know the status of the D-Cache, nor its
content.
If it is disabled, do not try to clean it, as it might contain random
data for random addresses, and this might just create a bus fault.
Invalidating it is enough.
If it is enabled, it means its content is not random.
SCB_DisableDCache() will clean it, invalidate it and disable it.
Signed-off-by: Aurelien Jarno <aurelien@aurel32.net>
This adds code to swap_helper.S which does special handling of LR when
the interrupt came from secure. The LR value is stored to memory, and
put back into LR when swapping back to the relevant thread.
Also, add special handling of FP state when switching from secure to
non-secure, since we don't know whether the original non-secure thread
(which called a secure service) was using FP registers, so we always
store them, just in case.
Signed-off-by: Øyvind Rønningstad <oyvind.ronningstad@nordicsemi.no>
Introduce a Kconfig option to allow Secure function calls to be
pre-empted.
Signed-off-by: Ioannis Glaropoulos <Ioannis.Glaropoulos@nordicsemi.no>
Signed-off-by: Øyvind Rønningstad <oyvind.ronningstad@nordicsemi.no>
Setup the static MPU regions before PRE_KERNEL_1 and
PRE_KERNEL_2 functions are invoked. This will set up
the MPU for SRAM regions in case code relocated to SRAM
is invoked from any of these functions.
Signed-off-by: Mahesh Mahadevan <mahesh.mahadevan@nxp.com>
Code relocated using CONFIG_CODE_DATA_RELOCATION_SRAM should
be allowed to execute from SRAM.
Signed-off-by: Mahesh Mahadevan <mahesh.mahadevan@nxp.com>
1. This will help us identify if the relocation is to
SRAM, which is used when setting up the MPU entry
for the SRAM region where the code is relocated.
2. Move the CODE_DATA_RELOCATION configs to the ARM-specific
folder.
Signed-off-by: Mahesh Mahadevan <mahesh.mahadevan@nxp.com>
CONFIG_FPU: The architecture dependency list is redundant.
Having CPU_HAS_FPU being selected by those archs as a dependency
is sufficient and cleaner.
CONFIG_FPU_SHARING: The default should always be y to be on the safe
side here, but as a compromise for not affecting existing config, let's
move the default selection local to those configs that care, again to
avoid a growing list of conditionals here. Adjust the help text which
applies to more than just Cortex-M.
Signed-off-by: Nicolas Pitre <npitre@baylibre.com>
There is a possibility that the DWT frequency calculation
is divided by zero. So this fixes the issue by repeatedly
trying to get the delta clock cycles and delta DWT cycles
until they are both non-zero.
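An illustrative sketch of the retry loop (the in-tree code may differ;
DWT->CYCCNT is assumed to be already enabled, and the usual CMSIS and
kernel headers are assumed):

    static uint32_t dwt_cycles_per_sec(void)
    {
        uint32_t dclk, ddwt;

        do {
            uint32_t clk0 = k_cycle_get_32();
            uint32_t dwt0 = DWT->CYCCNT;

            k_busy_wait(1000);  /* 1 ms */
            dclk = k_cycle_get_32() - clk0;
            ddwt = DWT->CYCCNT - dwt0;
        } while (dclk == 0 || ddwt == 0);  /* never divide by zero below */

        return (uint32_t)(((uint64_t)ddwt * sys_clock_hw_cycles_per_sec()) / dclk);
    }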
Signed-off-by: Daniel Leung <daniel.leung@intel.com>
Reboot functionality has nothing to do with PM, so move it out to the
subsys/os folder.
Signed-off-by: Gerard Marull-Paretas <gerard.marull@nordicsemi.no>
bus_fault() and hard_fault() were missing a final else statement
in their if ... else if constructs. This commit adds a non-empty else {}
to comply with coding guideline 15.7.
Signed-off-by: Jennifer Williams <jennifer.m.williams@intel.com>
z_arm_debug_monitor_event_error_check() was missing a final
else statement in its if ... else if construct and so violated guideline
15.7. This commit removes the else if for symmetry in the limited
early-exit conditions, rather than empty final else {}, to comply.
Signed-off-by: Jennifer Williams <jennifer.m.williams@intel.com>
Inline some minor clarifications regarding the
Lazy Stacking feature in the cortex-m pendSV
handler, for ease of understanding. Also, fix
some minor style issues in comments.
Signed-off-by: Ioannis Glaropoulos <Ioannis.Glaropoulos@nordicsemi.no>
Some of these registers may contain nuggets of information that would be
beneficial when debugging, so include them in the fault dump.
Signed-off-by: Bradley Bolen <bbolen@lexmark.com>
Do not hardcode the array size in the loop for printing out the floating
point registers of the exception stack frame. The size of this array
will change when Cortex-R support is added.
Signed-off-by: Bradley Bolen <bbolen@lexmark.com>
When CONFIG_MULTITHREADING=n, the kernel-specific PendSV handler is not
used. Remove it from the vector table.
Signed-off-by: Krzysztof Chruscinski <krzysztof.chruscinski@nordicsemi.no>
The GIC can return 0x3ff to indicate a spurious interrupt. Other
interrupt controllers could return something different. Check that the
pending interrupt is valid in order to avoid indexing past the end of
the isr_table.
This fixes #30465 and is based on the aarch64 fix in 9dd2731d.
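An illustrative sketch of the guarded dispatch (include path per the
<zephyr/...> prefix used in-tree; the helper name is hypothetical):

    #include <zephyr/sw_isr_table.h>

    static void dispatch_irq(uint32_t irq)
    {
        if (irq < CONFIG_NUM_IRQS) {
            const struct _isr_table_entry *ite = &_sw_isr_table[irq];

            ite->isr(ite->arg);
        }
        /* else: spurious ID (GICv2 reports 0x3ff); do not index isr_table */
    }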
Signed-off-by: Bradley Bolen <bbolen@lexmark.com>
The flag was present only when ZLI was enabled. That resulted in
additional ifdefs being needed whenever code supports both ZLI and
non-ZLI modes.
Removed the ifdefs and added a build assert to IRQ connections to fail at
compile time if IRQ_ZERO_LATENCY is set but ZLI is disabled. Additional
clean-up was made as a result of removing the ifdef.
Signed-off-by: Krzysztof Chruscinski <krzysztof.chruscinski@nordicsemi.no>
Both operands of an operator in which the usual arithmetic
conversions are performed shall have the same essential
type category.
Signed-off-by: Flavio Ceolin <flavio.ceolin@intel.com>
Add a note in the Kconfig help text that explains why Hard ABI
is not possible on builds with TF-M.
Signed-off-by: Ioannis Glaropoulos <Ioannis.Glaropoulos@nordicsemi.no>
When building with TFM, the app is linked with libraries built by the
TFM build system. TFM is always built with -msoft-float which is
equivalent to -mfloat-abi=soft. FP_HARDABI adds -mfloat-abi=hard
which gives errors when linking with the libs from TFM since they are
built with a different ABI.
Fixes https://github.com/zephyrproject-rtos/zephyr/issues/33956
Signed-off-by: Øyvind Rønningstad <oyvind.ronningstad@nordicsemi.no>
Split ARM and ARM64 architectures.
Details:
- CONFIG_ARM64 is decoupled from CONFIG_ARM (not a subset anymore)
- Arch and include AArch64 files are in a dedicated directory
(arch/arm64 and include/arch/arm64)
- AArch64 boards and SoC are moved to soc/arm64 and boards/arm64
- AArch64-specific DTS files are moved to dts/arm64
- The A72 support for the bcm_vk/viper board is moved in the
boards/bcm_vk/viper directory
Signed-off-by: Carlo Caione <ccaione@baylibre.com>
This patch introduces a new API to enable the FPU for a thread. It is
the counterpart of the existing k_float_disable() API. It also adds an
empty arch_float_enable() to each architecture that has
arch_float_disable(). The arc and riscv architectures already implement
arch_float_enable(), so those implementations are not touched.
Motivation: the current Zephyr implementation does not allow using the
FPU on the main thread and other system threads such as the work queue.
Users need to create another thread with K_FP_REGS for floating-point
programs. Users can use the FPU more easily if they can enable it on
running threads.
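A hedged usage example of the new API on the already-running thread
(include path per the <zephyr/...> prefix used in-tree):

    #include <zephyr/kernel.h>

    void enable_fpu_on_current_thread(void)
    {
        int ret = k_float_enable(k_current_get(), K_FP_REGS);

        if (ret == 0) {
            volatile float acc = 0.0f;

            acc += 1.5f;    /* FP registers may now be used safely */
        }
    }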
Signed-off-by: Katsuhiro Suzuki <katsuhiro@katsuster.net>
Add initial support for the Cortex-M55 Core which is an implementation
of the Armv8.1-M mainline architecture and includes support for the
M-Profile Vector Extension (MVE).
The support is based on the Cortex-M33 support that already exists in
Zephyr.
Signed-off-by: Kumar Gala <kumar.gala@linaro.org>
GCC 10 introduced, by default, calls to out-of-line helpers to implement
atomic operations with the '-moutline-atomics' option. This is breaking
several tests because the embedded calls are trying to access the
zephyr_data region from userspace that is declared as MT_P_RW_U_NA,
triggering a memory fault.
Since there is currently no support for MT_P_RW_U_RO (and probably never
will be), disable the out-of-line helpers by disabling the GCC option.
Signed-off-by: Carlo Caione <ccaione@baylibre.com>
It is apparently possible for one CPU to change the memory domain
of a thread already being executed on another CPU.
All CPUs must ensure they're using the appropriate mapping after a
thread is newly added to a domain.
Signed-off-by: Nicolas Pitre <npitre@baylibre.com>
Introduce the necessary routines to have the user thread stack correctly
mapped and the functions to swap page tables on context switch.
Signed-off-by: Carlo Caione <ccaione@baylibre.com>
The only user of arch_mem_domain_destroy was the deprecated
k_mem_domain_destroy function which has now been removed. So remove
arch_mem_domain_destroy as well.
Signed-off-by: Kumar Gala <kumar.gala@linaro.org>
There's no need to duplicate the linker section for each architecture.
Instead, move the section declaration to common-rom.ld.
Signed-off-by: Carles Cufi <carles.cufi@nordicsemi.no>
Pretty crude for now, as we always invalidate the entire set.
It remains to be seen if finer-grained TLB flushing is worth
the added complexity, given this ought to be a relatively rare event.
Signed-off-by: Nicolas Pitre <npitre@baylibre.com>
Introduce the basic support code for memory domains. Each domain
is associated with a top-level page table which is a copy of the global
kernel one. When a partition is added, the corresponding memory range is
made private before its mapping is adjusted.
Signed-off-by: Carlo Caione <ccaione@baylibre.com>
Signed-off-by: Nicolas Pitre <npitre@baylibre.com>
We need to protect against concurrent modifications to page tables and
their use counts.
It would have been nice to have one lock per domain, but we heavily
share page tables across domains. Hence the global lock.
Signed-off-by: Nicolas Pitre <npitre@baylibre.com>
Two scenarios are possible.
privatize_page_range:
Affected pages are made private if they're not already. This means a whole
new page branch starting from the top may be allocated and content
shared with the reference page tables, except for the private range
where content is duplicated.
globalize_page_range:
That's the reverse operation, where pages for the given range are shared
with the reference page tables and no-longer-needed pages are freed.
When changing a domain mapping the range needs to be privatized first.
When changing a global mapping the range needs to be globalized last.
This way page table sharing across domains is maximized and memory
usage remains optimal.
Signed-off-by: Nicolas Pitre <npitre@baylibre.com>
Make the allocation, population and linking of a new table into
a function of its own for easier code reuse.
Signed-off-by: Nicolas Pitre <npitre@baylibre.com>
CNTFRQ_EL0 can only be written at the highest Exception level implemented.
For example, if EL3 is the highest implemented Exception level,
CNTFRQ_EL0 can only be written at EL3.
Also move z_arm64_el_highest_plat_init to be called when is_el_highest is true.
Signed-off-by: Peng Fan <peng.fan@nxp.com>
This patch adds the code managing the syscalls. The privileged stack
is setup before jumping into the real syscall.
Signed-off-by: Carlo Caione <ccaione@baylibre.com>
This leverages the AT (address translation) instruction to test for a
given access permission. The result is then provided in the PAR_EL1
register.
Thanks to @jharris-intel for the suggestion.
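An illustrative sketch of the probe (assuming <stdint.h>/<stdbool.h>; the
in-tree helper differs in shape):

    static bool el0_can_read(const void *addr)
    {
        uint64_t par;

        /* ask the MMU: stage 1, EL0, read access */
        __asm__ volatile("at s1e0r, %0" : : "r"(addr));
        __asm__ volatile("isb" ::: "memory");
        __asm__ volatile("mrs %0, par_el1" : "=r"(par));

        /* PAR_EL1.F (bit 0) set means the translation faulted */
        return (par & 1U) == 0U;
    }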
Signed-off-by: Nicolas Pitre <npitre@baylibre.com>
Introduce the arch_user_string_nlen() assembly routine and the necessary
C code bits.
Signed-off-by: Carlo Caione <ccaione@baylibre.com>
Signed-off-by: Nicolas Pitre <npitre@baylibre.com>
User mode is only allowed to induce oopses and stack check failures via
software-triggered system fatal exceptions.
Signed-off-by: Carlo Caione <ccaione@baylibre.com>
The arch_is_user_context() function is relying on the content of the
tpidrro_el0 register to determine whether we are in user context or not.
This register is set to '1' when in EL1 and set back to '0' when user
threads are running in userspace.
Signed-off-by: Carlo Caione <ccaione@baylibre.com>
Introduce the first pieces needed to schedule user threads by defining
two different code paths for kernel and user threads.
Signed-off-by: Carlo Caione <ccaione@baylibre.com>
If EL2 is implemented but we're skipping EL2, we should still
do EL2 init. Otherwise we end up with a bunch of things still
at their (unknown) reset values.
This in particular causes problems when different
cores have different virtual timer offsets.
Signed-off-by: James Harris <james.harris@intel.com>
There are several issues with the current implementation of the
{inc,dec}_nest_counter macros.
The first problem is that it's internally using a call to a misplaced
function called z_arm64_curr_cpu() (for some unknown reason hosted in
irq_manage.c) that could potentially clobber the caller-saved registers
without any notice to the user of the macro.
The second problem is that, being a macro, the clobbered registers should
be specified at the calling site, but this is not possible given the current
implementation.
To fix these issues and make the call quicker, this patch rewrites the
code in assembly leveraging the availability of the _curr_cpu array. It
now clobbers only two registers passed from the calling site.
Signed-off-by: Carlo Caione <ccaione@baylibre.com>
Null-pointer exception detection using DWT is currently incompatible
with the current openocd runner default implementation, which leaves debug
mode on by default.
As a consequence, on all targets that use the openocd runner, null-pointer
exception detection using DWT will generate an assert,
and as a result all tests fail on such platforms.
Disable this until the openocd behavior is fixed (#32984) and enable
the MPU-based solution for now.
Signed-off-by: Erwan Gouriou <erwan.gouriou@linaro.org>
With _kernel_offset_to_nested, we are only able to access the nested
counter of the first CPU. Since we are going to support SMP, we need to
access the nested counter per CPU.
To get the current CPU, introduce z_arm64_curr_cpu for asm usage,
because arch_curr_cpu cannot be used in asm code.
Signed-off-by: Peng Fan <peng.fan@nxp.com>
There is no strict reason to use assembly for the reset routine. Move as
much code as possible to C code using the proper helpers.
Signed-off-by: Carlo Caione <ccaione@baylibre.com>
The names of the registers and bit-fields in the cpu.h file are
inconsistent and messy. Refactor the whole file using the proper suffixes
for bits, shifts and masks.
Signed-off-by: Carlo Caione <ccaione@baylibre.com>
Each vector slot has room for 32 instructions. The exception context
saving needs 15 instructions already. Rather than duplicating those
instructions in each out-of-line exception routines, let's store
them directly in the vector table. That vector space is otherwise
wasted anyway. Move the z_arm64_enter_exc macro into vector_table.S
as this is the only place where it should be used.
To further reduce code size, let's make z_arm64_exit_exc into a
function of its own to avoid code duplication again. It is put in
vector_table.S as this is the most logical location to go with its
z_arm64_enter_exc counterpart.
Signed-off-by: Nicolas Pitre <npitre@baylibre.com>
Assert if the null pointer de-referencing detection (via DWT) is
enabled when the processor is in debug mode, because the debug
monitor exception can not be triggered in debug mode (i.e. the
behavior is unpredictable). Add a note in the Kconfig definition
of the null-pointer detection implementation via DWT, stressing
that the solution requires the core be in normal mode.
Signed-off-by: Ioannis Glaropoulos <Ioannis.Glaropoulos@nordicsemi.no>
We introduce build time asserts for
CONFIG_CORTEX_M_DEBUG_NULL_POINTER_EXCEPTION_PAGE_SIZE
to catch that the user-supplied value has, as requested
by the Kconfig symbol specification, a power of 2 value.
For the MPU-based implementation of null-pointer detection
we can use an existing macro for the build time assert,
since the region for catching null-pointer exceptions
is a regular MPU region, with different restrictions,
depending on the MPU architecture. For the DWT-based
implementation, we introduce a custom build-time assert.
We add also a run-time ASSERT for the MPU-based
implementation in ARMv8-M platforms, which require
that the null pointer exception detection page is
already mapped by the MPU.
Signed-off-by: Ioannis Glaropoulos <Ioannis.Glaropoulos@nordicsemi.no>
By design, the DebugMonitor exception is only employed
for null-pointer dereferencing detection, and enabling
that feature is not supported in Non-Secure builds. So
when enabling the DebugMonitor exception, assert that
it is not targeting the Non Secure domain.
Signed-off-by: Ioannis Glaropoulos <Ioannis.Glaropoulos@nordicsemi.no>
Enable the null-pointer dereferencing detection by default
throughout the test-suite. Explicitly disable this for the
gen_isr_table test which needs to perform vector table reads.
Disable null-pointer exception detection on the qemu_cortex_m3
board, as the DWT is not emulated by QEMU on this platform.
Additionally, disable null-pointer exception detection on
mps2_an521 (QEMU target), as DWT is not present and the MPU
based solution won't work, since the target does not have
the area 0x0 - 0x400 mapped, but the QEMU still permits
read access.
Signed-off-by: Ioannis Glaropoulos <Ioannis.Glaropoulos@nordicsemi.no>
Implementation of the null pointer exception detection feature
using the MPU on Cortex-M. Null-pointer detection is implemented
by programming an MPU region to guard a limited area starting at
address 0x0. On non-ARMv8-M we program an MPU region with a
no-access policy. On ARMv8-M we program a region with any
permissions, assuming the region will overlap with the fixed
FLASH0 region. We add a compile-time message to warn the
user if the MPU-based null-pointer exception solution cannot
be used (ARMv8-M only).
Signed-off-by: Ioannis Glaropoulos <Ioannis.Glaropoulos@nordicsemi.no>
Padding is inserted after the (first-stage) vector table,
so that the Zephyr image does not attempt to use the
area which we reserve to detect null pointer dereferencing
(0x0 - <size>). If the end of the vector table section is
higher than the upper end of the reserved area, no padding
will be added. Note also that the padding will be added
only once, to the first-stage vector table, even if the current
snippet is included multiple times (this is for a corner case,
when we want to use this feature together with SW Vector Relaying
on MCUs without VTOR but with an MPU present).
Signed-off-by: Ioannis Glaropoulos <Ioannis.Glaropoulos@nordicsemi.no>
Additions to the null-pointer exception detection mechanism
for ARMv8-M Mainline MCUs.
Signed-off-by: Ioannis Glaropoulos <Ioannis.Glaropoulos@nordicsemi.no>
Implement the functionality to detect null pointer dereference
exceptions via the DWT unit in the ARMv7-M Mainline MCUs.
Signed-off-by: Ioannis Glaropoulos <Ioannis.Glaropoulos@nordicsemi.no>
When we enable the null pointer exception feature (using DWT)
we include debug.c in the build. debug.c contains the functions
to configure and enable null pointer detection using the Data
Watchpoint and Trace (DWT) unit.
Signed-off-by: Ioannis Glaropoulos <Ioannis.Glaropoulos@nordicsemi.no>
Extend the debug monitor exception handler to
- return recoverable faults when the debug monitor
is enabled but we do not get an expected DWT event,
- call a debug monitor routine to check for null pointer
exceptions.
Signed-off-by: Ioannis Glaropoulos <Ioannis.Glaropoulos@nordicsemi.no>
Move the DWT utility functions, present in timing.c,
into an internal cortex-m header.
Signed-off-by: Ioannis Glaropoulos <Ioannis.Glaropoulos@nordicsemi.no>
Introduce the required Kconfig symbol framework for the
Cortex-M-specific null pointer dereferencing detection
feature. There are two implementations (based on DWT and
MPU) so we introduce the corresponding choice symbols,
including a choice symbol to signify that the feature
is to be disabled.
Signed-off-by: Ioannis Glaropoulos <Ioannis.Glaropoulos@nordicsemi.no>
The most common secure monitor firmware in the ARM world is TF-A. The
current release allows up to 8 64-bit values to be returned from a
SMC64 call from AArch64 state.
Extend the number of possible return values from 4 to 8.
Signed-off-by: Carlo Caione <ccaione@baylibre.com>
Instead of relying on hardcoded offset in the assembly code, introduce
the offset macros to make the code more clear.
Signed-off-by: Carlo Caione <ccaione@baylibre.com>
The current code is assuming that the SMC/HVC helpers can only be used
by the PSCI driver. This is wrong because a mechanism to call into the
secure monitor should be made available regardless of using PSCI or not.
For example, several SoCs rely on SMC calls to read/write e-fuses,
retrieve the chip ID, control power domains, etc...
This patch introduces a new CONFIG_HAS_ARM_SMCCC symbol to enable the
SMC/HVC helpers support and export that to drivers that require it.
Signed-off-by: Carlo Caione <ccaione@baylibre.com>
This is fundamental enough that it better be initialized ASAP.
Many other things get initialized soon afterwards assuming the MMU
is already operational.
Signed-off-by: Nicolas Pitre <npitre@baylibre.com>
Location of __kernel_ram_start is too far and _app_smem .bss areas
are not covered. Use _image_ram_start instead.
Location of __kernel_ram_end is also way too far. We should stop at
_image_ram_end where the expected unmapped area starts.
Signed-off-by: Nicolas Pitre <npitre@baylibre.com>
It is easier to cover multiple segments this way, especially since
not all boundary symbols from the linker script come with a size
derivative.
Signed-off-by: Nicolas Pitre <npitre@baylibre.com>
The MT_OVERWRITE case is much more common. Redefine that flag as
MT_NO_OVERWRITE instead for those fewer cases where it is needed.
One such case is platform provided mappings. Apply them after the
common kernel mappings and use the MT_NO_OVERWRITE on them.
Signed-off-by: Nicolas Pitre <npitre@baylibre.com>
There is no real reason for keeping page tables into separate pools.
Make it global which allows for more efficient memory usage and
simplifies the code.
Signed-off-by: Nicolas Pitre <npitre@baylibre.com>
Introduce a remove_map() to ... remove a mapping.
Add a use count to the page table pool so pages can be dynamically
allocated, deallocated and reused.
Signed-off-by: Nicolas Pitre <npitre@baylibre.com>
Add a newer, much smaller and simpler implementation of abort and
join. No need to involve the idle thread. No need for a special code
path for self-abort. Joining a thread and waiting for an aborting one
to terminate elsewhere share an implementation. All work in both
calls happens under a single locked path with no unexpected
synchronization points.
This fixes a bug with the current implementation where the action of
z_sched_single_abort() was nonatomic, releasing the lock internally at
a point where the thread to be aborted could self-abort and confuse
the state such that it failed to abort at all.
Note that the arm32 and native_posix architectures, which have their
own thread abort implementations, now see a much simplified
"z_thread_abort()" internal API.
Signed-off-by: Andy Ross <andrew.j.ross@intel.com>
We need to form the ASSERT expression inside the MemManage
fault handler for the case where we are building without USERSPACE
and STACK GUARD support, in the same way it is formed for
the case with USERSPACE or MPU STACK GUARD support, that
is, we only assert if we came across a stacking error.
Data access violations can still occur even without user
mode or guards, e.g. when trying to write to Read-only
memory (such as the code region).
Signed-off-by: Ioannis Glaropoulos <Ioannis.Glaropoulos@nordicsemi.no>
Add the missing pieces to enable XIP for AArch64. Try to simulate the
XIP using QEMU using the '-bios' parameter.
Signed-off-by: Carlo Caione <ccaione@baylibre.com>
The call to sys_trace_idle() is potentially clobbering x0 resulting in a
wrong value being used by the following code. Save and restore x0 before
and after the call to sys_trace_idle() to avoid any issue.
Signed-off-by: Carlo Caione <ccaione@baylibre.com>
Suggested-by: James Harris <james.harris@intel.com>
Additional stack for tests when building with FPU_SHARING
enabled is required, because the option may increase ESF
stacking requirements for threads.
Signed-off-by: Ioannis Glaropoulos <Ioannis.Glaropoulos@nordicsemi.no>
When VTOR is implemented on the Cortex-M SoC, we can
basically use any address (properly aligned) for the
vector table starting address. In this commit we fix the setting of
VTOR in prep_c.c for non-XIP images, so we no longer need the vector
table to always be present at the start of RAM
(CONFIG_SRAM_BASE_ADDRESS), allowing extra linker sections to be placed
before the vector table section.
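A minimal sketch of the fixed setup (CMSIS names; _vector_start is the
usual linker-provided start of the vector table section, declared here as
an assumption):

    extern char _vector_start[];

    static void relocate_vector_table(void)
    {
        SCB->VTOR = (uint32_t)_vector_start & SCB_VTOR_TBLOFF_Msk;
        __DSB();
        __ISB();
    }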
Signed-off-by: Ioannis Glaropoulos <Ioannis.Glaropoulos@nordicsemi.no>
If CONFIG_EXTRA_EXCEPTION_INFO is enabled, log
the value of EXC_RETURN in the fault handler.
Signed-off-by: Ioannis Glaropoulos <Ioannis.Glaropoulos@nordicsemi.no>
Under FPU sharing mode, any thread is allowed to generate
a Floating Point context (use FP registers in FP instructions),
regardless of whether threads are pre-tagged with K_FP_REGS
option when they are created.
When building with MPU stack guard feature enabled,
a large MPU stack guard is required to catch stack
overflows, if lazy FP stacking is enabled. When lazy
FP stacking is not enabled, a default 32 byte guard is
sufficient.
If lazy stacking is enabled by default, all threads may
potentially generate FP context, so they would need to
program a large MPU guard, carved out of their reserved
stack memory.
To avoid this memory waste, we modify the behavior, and make
lazy stacking a dynamically enabled feature, implemented as
follows:
- threads that are not pre-tagged with K_FP_REGS, and have
not generated an FP context use a default MPU guard and disable
lazy stacking. As long as the threads do not have an active FP
context, they won't stack FP registers, anyway, on ISRs and
exceptions, while they will benefit from reserving a small
MPU guard size
- as soon as a thread starts using FP registers, ISRs might
temporarily experience some increased latency due to lazy
stacking being disabled. This will be the case until the next
context switch, where the threads that have active FP context
will be tagged with K_FP_REGS, enable lazy stacking, and
program a wide MPU guard.
The implementation is a tradeoff between performance (ISR
latency) and memory consumption.
Note that when MPU STACK GUARD feature is not enabled, lazy
FP stacking is always activated.
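A hedged sketch of the per-thread decision described above (CMSIS register
names; the exact in-tree hook and MPU guard sizing are not shown):

    static void configure_lazy_stacking(const struct k_thread *thread)
    {
        if (thread->base.user_options & K_FP_REGS) {
            /* thread owns an FP context: wide MPU guard, lazy stacking on */
            FPU->FPCCR |= FPU_FPCCR_LSPEN_Msk;
        } else {
            /* no FP context yet: default guard, lazy stacking off */
            FPU->FPCCR &= ~FPU_FPCCR_LSPEN_Msk;
        }
    }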
Signed-off-by: Ioannis Glaropoulos <Ioannis.Glaropoulos@nordicsemi.no>
For the standard multi-threading builds, we will
enforce FP context stacking only when FPU_SHARING
is set. For the single-threading use case we enable
context stacking by default.
Signed-off-by: Ioannis Glaropoulos <Ioannis.Glaropoulos@nordicsemi.no>
If clearing the CONTROL register is done in reset.S, we can skip
clearing the FPCA bit when enabling the floating point
support, to save a few instructions. The CONTROL
register is cleared right after boot, if the symbol
CONFIG_INIT_ARCH_HW_AT_BOOT is enabled.
Signed-off-by: Ioannis Glaropoulos <Ioannis.Glaropoulos@nordicsemi.no>
Major changes:
- move related functions together
- optimize add_map() not to walk the page tables *twice* on
every loop
- properly handle leftover size when a range is already mapped
- don't overwrite existing mappings by default
- return an error when the mapping fails
and make the code clearer overall.
Signed-off-by: Nicolas Pitre <npitre@baylibre.com>
Both _IRQ_VECTOR_TABLE_SECTION_NAME and _SW_ISR_TABLE_SECTION_NAME
are defined with asterisk at the end in an attempt to include
all related symbols in the linker script. However, these two
macros are also being used in the source code to specify
the destination sections for variables. Asterisks in the name
result in older GCC (4.x) complaining about those asterisks.
So create new macros for use in linker script, and keep
the names asterisk free.
Fixes #29936
Signed-off-by: Daniel Leung <daniel.leung@intel.com>
In the current interrupt nesting implementation, if an ISR is
interrupted while executing inside a branch, the lr_svc register will
be corrupted, and the branch of the interrupted ISR will exit to the
return address of the final branch of the interrupting ISR, which may
or may not correspond to the intended return address.
This commit fixes the aforementioned bug by storing the lr_svc register
in the stack at the ISR entry, and restoring its value before exiting
the ISR.
For more details, refer to the issue #30517.
Signed-off-by: Stephanos Ioannidis <root@stephanos.io>
This commit fixes the following bugs in the AArch32 z_arm_exc_exit
routine:
1. Invalid return address when calling `z_arm_pendsv` from the
exception-specific mode
2. Caller-saved register is referenced after a call to `z_arm_pendsv`
For more details, refer to the issue #31511.
Signed-off-by: Stephanos Ioannidis <root@stephanos.io>