The CONFIG_ROM_START_OFFSET is supposed to be added to
the current address when linking, instead of having the current
address set to it. So fix that.
Not sure why it worked up to this point, but llvm/clang/lld
complained that it could not move location counter backward.
Signed-off-by: Daniel Leung <daniel.leung@intel.com>
Introduce an optional hook to be called when the CPU is made idle.
If needed, this hook can be used to prevent the CPU from actually
entering sleep by skipping the WFE/WFI instruction.
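For illustration, a minimal sketch of what overriding such a hook could look
like, assuming it is exposed as an overridable function named
z_arm_on_enter_cpu_idle() returning a bool (the name, signature and helper
below are assumptions, not taken from this change):
```c
#include <stdbool.h>

/* Hypothetical condition; stands in for whatever state makes sleep unsafe. */
static bool sleep_must_be_skipped(void)
{
	return false;
}

/* Assumed hook: returning false is taken to mean "skip the WFE/WFI". */
bool z_arm_on_enter_cpu_idle(void)
{
	return !sleep_must_be_skipped();
}
```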
Signed-off-by: Andrzej Głąbek <andrzej.glabek@nordicsemi.no>
Looks like some implementors decided not to implement the full set of
PMP range matching modes. Let's rearrange the code so that any of those
modes can be disabled.
Signed-off-by: Nicolas Pitre <npitre@baylibre.com>
Let's honor CONFIG_MPU_REQUIRES_POWER_OF_TWO_ALIGNMENT even for kernel
stacks. This saves one global PMP slot when creating the guard area for
the IRQ stack, and some hw implementations might require that anyway.
With these changes, arch_mem_domain_max_partitions_get() becomes much
more reliable and tests/kernel/mem_protect is more likely to pass even
with the stack guard enabled.
Signed-off-by: Nicolas Pitre <npitre@baylibre.com>
Additional privileged stack space is used by peripheral emulators when
userspace is enabled. This is largely due to additional function calls and
data structures allocated on the stack. This can potentially lead to stack
smashing if the privileged stack size isn't high enough, causing an
exception.
Increase the privileged stack space when userspace and peripheral emulation
are enabled.
Signed-off-by: Aaron Massey <aaronmassey@google.com>
When CONFIG_SOC_ISR_SW_UNSTACKING is defined, it's up to the custom soc
code to remove the ESF, because the software-managed part of the ESF
depends on the hardware. Fix this in the ISR code.
Signed-off-by: Carlo Caione <ccaione@baylibre.com>
Some implementations may not capture the faulting instruction in mtval
and set it to zero when an illegal instruction fault is raised. This is
notably the case with QEMU version 7.0.0 when a CSR instruction is
involved.
Signed-off-by: Nicolas Pitre <npitre@baylibre.com>
The FRCSR, FSCSR, FRRM, FSRM, FSRMI, FRFLAGS, FSFLAGS and FSFLAGSI
are in fact CSR instructions targeting the fcsr, frm and fflags
registers. They should be caught as FPU instructions as well.
Signed-off-by: Nicolas Pitre <npitre@baylibre.com>
- IRQ state for the interrupted context corresponds to the PIE bit not
the IE bit.
- Restoring the saved FPU state should clear the entire field before
or'ing wanted bits in.
Signed-off-by: Nicolas Pitre <npitre@baylibre.com>
For RISC-V, the reg property of a cpu node in the devicetree describes
the low-level unique ID of each hart. Using devicetree macros, a list
of all CPUs with status "okay" can be generated.
Using devicetree overlays, a hart or multiple harts can be marked as
"disabled", thus excluding them from the list. This allows platforms
that have non-zero-indexed SMP-capable harts to be functionally mapped
to Zephyr's sequential CPU numbering scheme.
On kernel init, if the application has MP_MAX_NUM_CPUS greater than 1,
generate the list of cpu nodes from the devicetree with status "okay"
and map the unique hartids to Zephyr CPUs.
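Roughly, such a list can be produced with the standard devicetree helpers
along these lines (an illustrative sketch, not the exact code added here):
```c
#include <zephyr/devicetree.h>

/* Expand to the hartid (the cpu node's reg value) followed by a comma.
 * A real implementation would also need to skip non-cpu children of
 * /cpus such as cpu-map.
 */
#define CPU_HART_ID(node_id) DT_REG_ADDR(node_id),

/* hartids of all cpu nodes with status "okay", in devicetree order. */
static const unsigned long cpu_node_list[] = {
	DT_FOREACH_CHILD_STATUS_OKAY(DT_PATH(cpus), CPU_HART_ID)
};
```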
While we are at it, as the hartid is the value that gets passed to
z_riscv_secondary_cpu_init, use that as the variable name instead of
cpu_num.
Signed-off-by: Conor Paxton <conor.paxton@microchip.com>
RISC-V multi-hart systems that employ a heterogeneous core complex are
not guaranteed to have the SMP-capable harts starting with a unique ID
of zero, matching Zephyr's sequential zero-indexed CPU numbering scheme.
Add an option, RV_BOOT_HART, to choose the hart to boot from.
On reset, check whether the current hart equals RV_BOOT_HART: if so, boot
the first core; otherwise, loop in the secondary-core boot path and wait
to be brought up.
For multi-hart systems that are not running a Zephyr MP or SMP
application, park the non-Zephyr harts in a WFI loop.
Signed-off-by: Conor Paxton <conor.paxton@microchip.com>
Add an option to generate simplified error codes instead of more
specific, architecture-specific error codes. Enable this by default in
tests to make exception tests more generic across hardware.
Fixes #54053.
Signed-off-by: Jordan Yates <jordan.yates@data61.csiro.au>
Disable the Python argparse library's automatic abbreviation of
command-line arguments. This prevents issues whereby a new command is
added and code that wrongly used the shortened form of an existing
argument, which happens to match the new command, would silently change
script behaviour.
Signed-off-by: Jamie McCrae <jamie.mccrae@nordicsemi.no>
Commit 4f9b547ebd ("riscv: smp: prepare for more than one IPI type")
didn't clear pending IPI flags.
Signed-off-by: Nicolas Pitre <npitre@baylibre.com>
We can leverage the FPU dirty state as an indicator for preemptively
reloading the FPU content when a thread that did use the FPU before
being scheduled out is scheduled back in. This avoids the FPU access
trap overhead when switching between multiple threads with heavy FPU
usage.
Signed-off-by: Nicolas Pitre <npitre@baylibre.com>
FPU context switching is always performed on demand through the FPU
access exception handler. Actual task switching only grants or denies
FPU access depending on the current FPU owner.
Because RISC-V doesn't have a dedicated FPU access exception, we must
catch the Illegal Instruction exception and look for actual FP opcodes.
There is no longer a need to allocate FPU storage on the stack for every
exception, making the esf smaller and stack overflows less likely.
Signed-off-by: Nicolas Pitre <npitre@baylibre.com>
Instead of saving/restoring FPU content on every exception and task
switch, this replaces FPU sharing support with a "lazy" (on-demand)
context switching algorithm similar to the one used on ARM64.
Every thread starts with FPU access disabled. On the first access the
FPU trap is invoked to:
- flush the FPU content to the previous thread's memory storage;
- restore the current thread's FPU content from memory.
When a thread loads its data in the FPU, it becomes the FPU owner.
FPU content is preserved across task switching, however FPU access is
either allowed if the new thread is the FPU owner, or denied otherwise.
A thread may claim FPU ownership only through the FPU trap. This way,
threads that don't use the FPU won't force an FPU context switch.
If only one running thread uses the FPU, there will be no FPU context
switching to do at all.
It is possible to do FP accesses in ISRs and syscalls. This is not the
norm though, so the same principle is applied here, although exception
contexts may not own the FPU. When they access the FPU, the FPU content
is flushed and the exception context is granted FPU access for the
duration of the exception. Nested IRQs are disallowed in that case to
dispense with the need to save and restore the exception's FPU context data.
This is the core implementation only to ease reviewing. It is not yet
hooked into the build.
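For illustration, the on-demand transfer performed by the FPU trap amounts
to something like the following (a self-contained model, not the actual
code; all names are stand-ins for the real kernel symbols):
```c
#include <stddef.h>

struct fpu_ctx { unsigned long f[32]; unsigned long fcsr; };
struct thread  { struct fpu_ctx fp; };

static struct thread *fpu_owner;              /* per-CPU in the real code */

static void fpu_save(struct fpu_ctx *to)      { (void)to;   /* fsd f0.. */ }
static void fpu_restore(struct fpu_ctx *from) { (void)from; /* fld f0.. */ }
static void fpu_access_allow(void)            { /* set mstatus.FS */ }

void fpu_trap(struct thread *current)
{
	if (fpu_owner != current) {
		if (fpu_owner != NULL) {
			fpu_save(&fpu_owner->fp);   /* flush previous owner */
		}
		fpu_restore(&current->fp);          /* load current thread  */
		fpu_owner = current;                /* claim ownership      */
	}
	fpu_access_allow();
}
```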
Signed-off-by: Nicolas Pitre <npitre@baylibre.com>
Right now this is hardcoded to z_sched_ipi(). Make it so that other IPI
services can be added in the future.
Signed-off-by: Nicolas Pitre <npitre@baylibre.com>
If running under the Xtensa simulator, it is good to tell the simulator
to stop execution once we reach a double exception, as the current
double exception handler is simply an endless loop. If we turn
on tracing in the simulator, the output file would contain
an infinite iteration of this endless loop, and the simulator
needs to be stopped manually before the file size goes out of
control. So we need to tell the simulator to stop once
we reach this point instead of doing an endless loop.
Signed-off-by: Daniel Leung <daniel.leung@intel.com>
Given the Zephyr CPU number is no longer tied to the hartid, we must
consider the actual hartid when sending an IPI to a given CPU. Since
those hartids can be anything, let's just save them in the cpu structure
as each CPU is brought online.
While at it, throw in some `get_hart_msip()` cleanups.
Signed-off-by: Nicolas Pitre <npitre@baylibre.com>
Currently it is assumed that Zephyr CPU numbers match their hartid
value one for one. This assumption was relied upon to efficiently
retrieve the current CPU's `struct _cpu` pointer.
People are starting to have systems with a mix of different usage for
each CPU and such an assumption may no longer be true.
Let's completely decouple the hartid from the Zephyr CPU number by
stuffing each CPU's `struct _cpu` pointer in their respective scratch
register instead. `arch_curr_cpu()` becomes more efficient as well.
Since the scratch register was previously used to store userspace's
exception stack pointer, that is now moved into `struct _cpu_arch`
which implied minor user space entry code cleanup and rationalization.
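For illustration, the resulting accessor might look roughly like this (a
sketch only; the real arch_curr_cpu() and CSR handling may differ):
```c
struct _cpu;

/* Illustrative only: the scratch CSR is assumed to hold this CPU's
 * struct _cpu pointer, set once when the CPU is brought online.
 */
static inline struct _cpu *arch_curr_cpu(void)
{
	struct _cpu *cpu;

	__asm__ volatile ("csrr %0, mscratch" : "=r" (cpu));
	return cpu;
}
```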
Signed-off-by: Nicolas Pitre <npitre@baylibre.com>
Until now, the Cortex A/R AArch32 implementation for context
switching expected interrupts to be disabled. This is
true if a context switch happens in thread context.
But if a context switch happens as the last action during
interrupt context, this assumption is not true because
interrupts are still enabled (to allow nested interrupts).
Disable interrupts at the last interrupt action to ensure
interrupts are always disabled before context switching
is processed.
Signed-off-by: Dat Nguyen Duy <dat.nguyenduy@nxp.com>
On platforms where the linker is capable of doing global optimizations,
like relaxing address modes and synthesizing new instructions, Zephyr has
to disable them when enabling USERSPACE, since the build expects that
addresses don't change after the first-stage build.
Signed-off-by: Flavio Ceolin <flavio.ceolin@intel.com>
This commit updates all in-tree code to use `CONFIG_CPP` instead of
`CONFIG_CPLUSPLUS`, which is now deprecated.
Signed-off-by: Stephanos Ioannidis <stephanos.ioannidis@nordicsemi.no>
Return specific fault reasons instead of the generic
`K_ERR_CPU_EXCEPTION`, which provides minimal debugging aid.
Fixes #53093.
Signed-off-by: Jordan Yates <jordan.yates@data61.csiro.au>
This patchset fixes two things:
1. The proper sys_* functions are used for cache maintenance
operations.
2. To check the status of the L1 cache, the SCB registers are probed, so
the code assumes a core architecture cache is present; thus make
the code conditionally compiled on CONFIG_ARCH_CACHE.
Signed-off-by: Carlo Caione <ccaione@baylibre.com>
The arch_dcache_range() function does not exist anymore, nor does the
K_CACHE_WB macro, so remove their use entirely.
The arch_dcache_flush_range() signature changed, so apply the relevant
updates.
Signed-off-by: Tomasz Bursztyka <tomasz.bursztyka@linux.intel.com>
RISC-V has a modular design. Some hardware with a custom interrupt
controller needs a bit more work to lock / unlock IRQs.
Account for this hardware by introducing a set of new
z_soc_irq_* functions that can override the default behaviour.
Signed-off-by: Carlo Caione <ccaione@baylibre.com>
esf_get_sp is used only when CONFIG_THREAD_STACK_INFO is enabled.
Just move this function around to be inside an ifdef guard.
Signed-off-by: Flavio Ceolin <flavio.ceolin@intel.com>
Some RISC-V SoCs implement a mechanism for hardware supported stacking /
unstacking of registers during ISR / exceptions. What happens is that on
ISR / exception entry part of the context is automatically saved by the
hardware on the stack without software intervention, and the same part
of the context is restored by the hardware usually on mret.
This is currently not yet supported by Zephyr, where the full context
must be saved by software in the full-fledged ESF. This patchset
tries to address exactly this case.
At least three things are needed to support this in a general fashion:
(1) a way to store in software only the part of the ESF not
already stacked by hardware, (2) a way to restore in software only the
part of the context that is not going to be restored by hardware and (3)
a way to define a custom ESF.
Point (3) is important because the full ESF frame is now composed of a
custom part depending on the hardware (which can choose which registers to
stack / unstack and the order they are saved onto the stack) and a part
defined in software for the remaining part of the context.
In this patch a new CONFIG_RISCV_SOC_HAS_ISR_STACKING option is introduced
that enables the code path supporting the three points by means of three
macros that must be implemented by the user in a soc_isr_stacking.h file:
SOC_ISR_SW_STACKING, SOC_ISR_SW_UNSTACKING and SOC_ISR_STACKING_ESF
(refer to the symbol help for more details).
This is an example of soc_isr_stacking.h for hardware that doesn't do
any hardware stacking / unstacking, where everything is managed in
software:
#ifndef __SOC_ISR_STACKING
#define __SOC_ISR_STACKING

#if !defined(_ASMLANGUAGE)
#define SOC_ISR_STACKING_ESF_DECLARE \
	struct __esf { \
		unsigned long ra; \
		unsigned long t0; \
		unsigned long t1; \
		unsigned long t2; \
		unsigned long t3; \
		unsigned long t4; \
		unsigned long t5; \
		unsigned long t6; \
		unsigned long a0; \
		unsigned long a1; \
		unsigned long a2; \
		unsigned long a3; \
		unsigned long a4; \
		unsigned long a5; \
		unsigned long a6; \
		unsigned long a7; \
		unsigned long mepc; \
		unsigned long mstatus; \
		unsigned long s0; \
	} __aligned(16)
#else
#define SOC_ISR_SW_STACKING \
	addi sp, sp, -__z_arch_esf_t_SIZEOF; \
	DO_CALLER_SAVED(sr);

#define SOC_ISR_SW_UNSTACKING \
	DO_CALLER_SAVED(lr);
#endif /* _ASMLANGUAGE */
#endif /* __SOC_ISR_STACKING */
Signed-off-by: Carlo Caione <ccaione@baylibre.com>
Debug monitor needs to be configured to a low priority in order to be
useful for debugging (to prioritize other interrupts when waiting on a
breakpoint).
Added a config that configures the interrupt this way.
Signed-off-by: Piotr Jasiński <piotr.jasinski@nordicsemi.no>
It seems that currently it's impossible to create a custom
implementation for the debug monitor exception without updating the
vector table (z_arm_debug_monitor maps to fault).
My proposition is to make this symbol weak, so that it can be overridden.
Signed-off-by: Piotr Jasiński <piotr.jasinski@nordicsemi.no>
Save FP user register and FP register file during context switch.
This change enables shared FP registers mode using CONFIG_FPU_SHARING.
Since there is no lazy stacking, the FPU registers will be saved regardless
of whether floating point calculations are performed in the threads when
CONFIG_FPU_SHARING is enabled. This requires 72 additional bytes in the
stack memory.
Signed-off-by: Lucas Tamborrino <lucas.tamborrino@espressif.com>
Use interrupts (with dedicated interrupt line) for irq_offload
instead of exception-based implementation.
That allows us to implement IRQ_OFFLOAD without adding special code
to the interrupt / exception path for IRQ_OFFLOAD handling and,
moreover, to test the real interrupt code.
Signed-off-by: Eugeniy Paltsev <Eugeniy.Paltsev@synopsys.com>
Signed-off-by: Evgeniy Paltsev <PaltsevEvgeniy@gmail.com>
- Add DSP registers to the context switch.
- Add AGU registers to the context switch to support XY memory.
- Add a thread option and API to disable/enable the DSP switch.
Signed-off-by: Siyuan Cheng <siyuanc@synopsys.com>
There is the possibility that, when reconfiguring the static regions,
some data that must be accessed is temporarily not accessible due to the
change in the MPU regions configuration. Work around this by disabling
the MPU when doing the reconfiguration, same as with dynamic regions,
until BR can be enabled.
Signed-off-by: Duong Vu Nam <duong.vunam@nxp.com>
Make the NOCACHE_MEMORY config depend on ARCH_HAS_NOCACHE_MEMORY_SUPPORT.
Enable ARCH_HAS_NOCACHE_MEMORY_SUPPORT for Cortex-R52 to run NXP S32Z/E
with the nocache attribute.
Enable nocache in each driver that uses it.
Signed-off-by: Duong Vu Nam <duong.vunam@nxp.com>
Support I/D cache for Cortex-R52 to run with cache on NXP S32Z/E.
Make sure no data is present in the D-Cache before initializing the MPU.
Signed-off-by: Duong Vu Nam <duong.vunam@nxp.com>
Cache operations must be quick, optimized and possibly inlined. The
current API is clunky: functions are not inlined, and parameters that
are basically always known at compile time are passed around.
In this patch we rework the cache functions to allow us to get rid of
useless parameters and make inlining easier.
In particular this changeset is doing three things:
1. `CONFIG_HAS_ARCH_CACHE` is now `CONFIG_ARCH_CACHE` and
`CONFIG_HAS_EXTERNAL_CACHE` is now `CONFIG_EXTERNAL_CACHE`
2. The cache API has been reworked.
3. Comments are added.
Signed-off-by: Carlo Caione <ccaione@baylibre.com>
As we start to use _st32_huge_offset in other places (i.e. DSP code),
introduce an optimized version using an address-shifted instruction,
which doesn't produce an extra instruction.
Signed-off-by: Eugeniy Paltsev <Eugeniy.Paltsev@synopsys.com>
Signed-off-by: Evgeniy Paltsev <PaltsevEvgeniy@gmail.com>
Control shared interrupts enabling/disabling via IDU.
With that we can easily enable and disable them for all cores
in one place.
Signed-off-by: Eugeniy Paltsev <Eugeniy.Paltsev@synopsys.com>
Signed-off-by: Evgeniy Paltsev <PaltsevEvgeniy@gmail.com>
Make ARC_MP_PRIMARY_CPU_ID definition public so it can be used in
other ARC code.
Signed-off-by: Eugeniy Paltsev <Eugeniy.Paltsev@synopsys.com>
Signed-off-by: Evgeniy Paltsev <PaltsevEvgeniy@gmail.com>
In the 0.15.2 SDK we specify r26 as the default register for the TLS
cached pointer, so it isn't used by the compiler even if TLS support
isn't enabled. Restore the previous behavior - so if we don't use
TLS in Zephyr, the register doesn't stay reserved.
Signed-off-by: Eugeniy Paltsev <Eugeniy.Paltsev@synopsys.com>
Signed-off-by: Evgeniy Paltsev <PaltsevEvgeniy@gmail.com>
The code in prep_c sets VBAR to relocate the vectors from 0x0, assuming
the low vector bit in SCTLR to be clear. This isn't the case on all
hardware, so set it explicitly to support such hardware.
Signed-off-by: Théophile Ranquet <theophile.ranquet@gmail.com>
1. This header is of no use to assembly code.
2. If xclib is used, this header pulls in xclib's stdbool, which expands
to a typedef.
Signed-off-by: honglin leng <a909204013@gmail.com>
Arm provides a default address map defining default behaviors for
certain address ranges, which can be overlayed with additional regions
in the MPU. Users may also turn off this background map, so that only
regions explicitly programmed in the MPU are allowed.
This provides a Kconfig so that platforms using a non-standard address
map may disable the background address map and provide their own
explicit MPU regions.
Signed-off-by: Benjamin Gwin <bgwin@google.com>
This is a follow-up to commit f400c94adf.
Fix typos in names of introduced macros (*STR -> *SR) and cast their
values to uint32_t to avoid warnings reported for messages formatted
with %x.
Signed-off-by: Andrzej Głąbek <andrzej.glabek@nordicsemi.no>
The compiler defines __ARC_TLS_REGNO__ as the number of the
register used for TLS variables. Use that instead of hard-coding
a specific register.
Signed-off-by: Keith Packard <keithp@keithp.com>
This commit updates the `_posix_zephyr_main` declaration to use the
return type of `int` instead of `void` when `CONFIG_CPP_MAIN=y` (i.e.
C++-compliant main() support is enabled) so that Zephyr applications
defining their main() in a C++ source file can make use of the proper
main() definition of `int main(void)` as required by the C++ standard.
Note that the forward declaration of `_posix_zephyr_main` is required
if and only if the main() is defined in a C++ source file (i.e. when
`CONFIG_CPP_MAIN=y`).
Signed-off-by: Stephanos Ioannidis <stephanos.ioannidis@nordicsemi.no>
Besides adding ARCH_HAS_CODE_DATA_RELOCATION, this patch also adds
support for the "sample_controller" SoC (used by qemu_xtensa) as
demonstration.
As Xtensa lacks a common linker script at the arch level, enabling it
for each platform will be a piecemeal effort. This patch adds it to the
`soc/xtensa/sample_controller` SoC. Basically, default RAMABLE_REGION is
set to be called "RAM", and hooks are inserted so that
gen_relocate_app.py can add the relevant linker bits.
Also, `tests/application_development/code_relocation` was tweaked to
support the qemu_xtensa platform. Basically, add the relevant linker
script and ensure that relevant memory regions have their program header
(PHDR) associated.
Signed-off-by: Ederson de Souza <ederson.desouza@intel.com>
There is code duplication, as one copy is in C and the other is an
assembly macro. As there is no easy way to find out about this
duplication, adding a comment seems the best way to go.
Signed-off-by: Henri Xavier <datacomos@huawei.com>
In preparation for using CONFIG_*_ENDIAN instead of __BYTE_ORDER__, add
a reflection of CONFIG_BIG_ENDIAN that will allow users to conditionally
compile with #ifdef instead of #ifndef while keeping the default case
(little endian) as the first block in the conditional compilation.
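The intended usage pattern (assuming the reflected symbol is named
CONFIG_LITTLE_ENDIAN):
```c
/* Old pattern: negative test on CONFIG_BIG_ENDIAN. */
#ifndef CONFIG_BIG_ENDIAN
/* little-endian handling (the default) */
#else
/* big-endian handling */
#endif

/* New pattern: positive test, default case still the first block. */
#ifdef CONFIG_LITTLE_ENDIAN
/* little-endian handling (the default) */
#else
/* big-endian handling */
#endif
```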
Signed-off-by: Carles Cufi <carles.cufi@nordicsemi.no>
The buffer contents returned from arch_gdb_reg_readone is a counted array
of bytes, not a C string. Use memcpy instead of strcpy for the failure
return path to avoid compiler warning about missing NUL termination.
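Illustration of the pattern (the buffer name and error token below are made
up for the example, not taken from the change):
```c
#include <stdint.h>
#include <string.h>

void fill_error_reply(uint8_t *buf)
{
	/* The reply is a counted byte sequence, not a NUL-terminated string,
	 * so copy a fixed number of bytes instead of using strcpy().
	 */
	memcpy(buf, "E01", 3);
}
```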
Signed-off-by: Keith Packard <keithp@keithp.com>
This reverts commit a7b5d606c7.
The assumption behind that commit was wrong. The software-based stack
sentinel writes to the very bottom of the _writable_ stack area i.e.
right next to the actual PMP based guard area. So they are compatible.
Signed-off-by: Nicolas Pitre <npitre@baylibre.com>
Change for loops of the form:

	for (i = 0; i < CONFIG_MP_NUM_CPUS; i++)
		...

to:

	unsigned int num_cpus = arch_num_cpus();

	for (i = 0; i < num_cpus; i++)
		...
We do the call outside of the for loop so that it only happens once,
rather than on every iteration.
Signed-off-by: Kumar Gala <kumar.gala@intel.com>
Move the FS_BASE MSR code to the top of __resume to ensure
that %fs relative addressing run in the thread switching hook
works.
Signed-off-by: Keith Packard <keithp@keithp.com>
Automated change: search for files using the "IRQ_CONNECT()" API but not
including <zephyr/irq.h>, and add the missing include.
Signed-off-by: Gerard Marull-Paretas <gerard.marull@nordicsemi.no>
Some headers made use of types defined in sys_clock.h (e.g. k_timeout_t)
without including it.
Signed-off-by: Gerard Marull-Paretas <gerard.marull@nordicsemi.no>
There seems to be an unresolved dependency chain in some x86 arch
headers, this file needs kernel.h to be included before arch_data/func.
This patch is a workaround, but problem should be fixed properly at some
point.
Signed-off-by: Gerard Marull-Paretas <gerard.marull@nordicsemi.no>
The CONFIG_STACK_SENTINEL option adds 4 bytes to the stack. Take these
into account for CONFIG_ARCH_POSIX_RECOMMENDED_STACK_SIZE.
Signed-off-by: Pieter De Gendt <pieter.degendt@basalte.be>
It's useful for RAMABLE_REGION to have a uniform name when
CODE_DATA_RELOCATION is supported, because otherwise the build system
needs to be aware of how the region name differs between architectures.
Since architectures tend to prefer one of 'SRAM' or 'RAM' for that
region, prefer to use 'RAM' as the more general term.
Signed-off-by: Peter Marheine <pmarheine@chromium.org>
The interrupt stack is used as the system stack during kernel
initialization while IRQs are not yet enabled. The sp register is
set to z_interrupt_stacks + CONFIG_ISR_STACK_SIZE.
CONFIG_ISR_STACK_SIZE only represents the desired usable stack size.
This does not take into account the added guard area. Result is a stack
whose pointer is much closer to the trigger zone than expected when
CONFIG_PMP_STACK_GUARD=y, and the SMP configuration in particular pushes
it over the edge during many CI test cases.
Worse: during early init we're not quite ready to handle exceptions
yet and complete havoc ensues with no meaningful debugging output.
Make sure the early assembly code locates the actual top of the stack
by generating a constant with its true size.
Signed-off-by: Nicolas Pitre <npitre@baylibre.com>
The software-based stack sentinel writes to the very bottom of the
stack area triggering the PMP stack protection. Obviously they can't
be used together.
Signed-off-by: Nicolas Pitre <npitre@baylibre.com>
The IRQ stack in particular is different on each CPU, and so is its
stack guard PMP entry value. This creates 2 issues:
- The assertion ensuring the last global PMP address is the same
for each CPU does fail;
- That last global PMP address can't be relied upon to create a
single-slot per-thread TOR mapping.
Fix both issues by not remembering the actual address for the last
global entry but a dummy address instead that is guaranteed not to
match any opportunistic single-slot TOR mapping.
While at it, lock that IRQ stack guard PMP entry.
Signed-off-by: Nicolas Pitre <npitre@baylibre.com>
GCC may generate ldp/stp instructions with the Advanced SIMD Qn
registers for consecutive 32-byte loads and stores.
This commit disables this GCC behaviour because saving and restoring
the Advanced SIMD context is very expensive, and it is preferable to
keep it turned off by not emitting these instructions for better
context switching performance.
Signed-off-by: Stephanos Ioannidis <root@stephanos.io>
Z_THREAD_STACK_BUFFER() must not be used here. This is meant for stacks
defined with K_THREAD_STACK_ARRAY_DEFINE() whereas in this case we are
given a stack created with K_KERNEL_STACK_ARRAY_DEFINE().
If CONFIG_USERSPACE=y then K_THREAD_STACK_RESERVED gets defined with
a bigger value than K_KERNEL_STACK_RESERVED. Then Z_THREAD_STACK_BUFFER()
returns a pointer that is more advanced than expected, resulting in a
stack pointer outside its actual stack area and therefore memory
corruption ensues.
Signed-off-by: Nicolas Pitre <npitre@baylibre.com>
The ssf parameter of handler_bad_syscall got a null pointer because R1
was not pushed onto the stack in the right order on Cortex-M0. Adjust
the stack push order so that ssf is passed correctly.
Fixes #50146.
Signed-off-by: Enjia Mai <enjia.mai@intel.com>
It turns out that SOF is already using a symbol named
"zephyr_app_main()", so this produces a collision. Pick something
that looks more relevant to "posix", and put an underscore on it (it's
a "system" symbol, after all).
Signed-off-by: Andy Ross <andyross@google.com>
It seems like libfuzzer wants to relocate 32 bit instrumented code
sections at runtime at addresses different than the ones in the ELF
file. This is problematic, because Zephyr files are compiled
statically and so will crash the first time they try to jump to an
absolute .text address (basically at the first function call after a
fuzzer entry point).
It seems that building with -fPIC is enough to defeat this (we use the
host linker script, which will manage the GOT/PLT entries for us),
which will work as long as the fuzzer isn't playing games with data
other than text. None of this seems to be documented, so... I guess
it's as good as we can get. It works, at least.
(x86_64 binaries don't show the same behavior, they run where they
were linked)
Signed-off-by: Andy Ross <andyross@google.com>
Align backtrace output with the style used in the rest of the codebase.
This makes it more convenient to compare the backtrace to e.g. objdump
output.
Signed-off-by: Kai Vehmanen <kai.vehmanen@linux.intel.com>
On GICv3, when we send an IPI interrupt, aff3, aff2 and aff1 should
be assigned values corresponding to the PE for which the interrupt will
be generated. target_list only corresponds to aff0.
On real hardware, aff3, aff2, aff1 and aff0 should be treated as a
whole to determine a PE.
Signed-off-by: Huifeng Zhang <Huifeng.Zhang@arm.com>
VMPIDR_EL2 is assigned the value returned by EL2 reads of MPIDR_EL1.
MPIDR_EL1 is the register holding the Multiprocessor ID, which is used to
identify different cores. Because of the virtualization requirements
for AArch64, MPIDR_EL1 should be virtualized (the different virtualized
cores can run on the same physical core). Thus the value of MPIDR_EL1
should be switched when the VM is switched. Setting the VMPIDR_EL2 is
the way to change the value returned by EL1 reads of MPIDR_EL1. Even
without virtualization, we still need to set VMPIDR_EL2 during booting
at EL2 or EL3. Otherwise, all cores' IDs are zero at the EL1 stage
which will break the SMP system.
Signed-off-by: Huifeng Zhang <Huifeng.Zhang@arm.com>
This commit fixes placing the vectors section through
zephyr_linker_sources(ROM_START ...) (as done in the ARM
architecture port) so that its order can be adjusted by SORT_KEY.
Fixes #49903
Signed-off-by: Mateusz Sierszulski <msierszulski@antmicro.com>
The Xtensa arch has historically had state/user register accessor
macros with bare three-byte symbol names. I think this might have
been in the original Cadence-contributed arch integration, but I'm not
sure. In any case they also exist in the same names in vendor
HAL/toolchain code and are causing collisions. We never should have
had these symbols exposed in our header.
Put them under an XTENSA_ prefix to decollide.
Signed-off-by: Andy Ross <andyross@google.com>
As of today <zephyr/zephyr.h> is 100% equivalent to <zephyr/kernel.h>.
This patch proposes to then include <zephyr/kernel.h> instead of
<zephyr/zephyr.h> since it is more clear that you are including the
Kernel APIs and (probably) nothing else. <zephyr/zephyr.h> sounds like a
catch-all header that may be confusing. Most applications need to
include a bunch of other things to compile, e.g. driver headers or
subsystem headers like BT, logging, etc.
The idea of a catch-all header in Zephyr is probably not feasible
anyway. Reason is that Zephyr is not a library, like it could be for
example `libpython`. Zephyr provides many utilities nowadays: a kernel,
drivers, subsystems, etc and things will likely grow. A catch-all header
would be massive, difficult to keep up-to-date. It is also likely that
an application will only build a small subset. Note that subsystem-level
headers may use a catch-all approach to make things easier, though.
NOTE: This patch is **NOT** removing the header, just removing its usage
in-tree. I'd advocate for its deprecation (add a #warning on it), but I
understand many people will have concerns.
Signed-off-by: Gerard Marull-Paretas <gerard.marull@nordicsemi.no>
Convert the device to be Devicetree based. Adjusted tests and other
areas that were using old Kconfig properties.
Signed-off-by: Gerard Marull-Paretas <gerard.marull@nordicsemi.no>
The "framebuf" driver was an incomplete driver expecting _clients_ to
implement missing functionality (i.e. init and device definition)
outside of the driver. This pattern of scattering driver code throughout
the tree is not common (if used at all). If certain drivers share
functionality, one can create a common module within the subsystem (see
e.g. ILI9XXX drivers).
The _generic_ framebuffer code was only used to implement the Intel
Multiboot framebuffer driver. This patch centralizes all the scattered
code in the subsystem and adjusts the driver name to "intel_multibootfb"
to make things clear. If there's ever another framebuffer driver that
shares code, it can be split into multiple modules.
Signed-off-by: Gerard Marull-Paretas <gerard.marull@nordicsemi.no>
Two issues:
- An unnecessary pair of parentheses caused rounding errors (by truncating
a small value before multiplying it).
- arch_timing_cycles_to_ns_avg() wasn't actually converting the result
to nanoseconds.
Signed-off-by: Ederson de Souza <ederson.desouza@intel.com>
This commit renames the ARC64 output format from `elf64-littlearc` to
`elf64-littlearc64` as required by the updated ARC patches for the GCC
12.1 release.
Signed-off-by: Stephanos Ioannidis <root@stephanos.io>
This PR allows the user to add symbols to the nocache
section. The use for this could be as follows:

	zephyr_linker_sources_ifdef(CONFIG_NOCACHE_MEMORY
		NOCACHE_SECTION
		nocache.ld
	)

nocache.ld (as shown below) can define additional
symbols to go into the nocache section:

	. = ALIGN(4);
	KEEP(*(NonCacheable))
Signed-off-by: Mahesh Mahadevan <mahesh.mahadevan@nxp.com>
Add support for LLVM's libfuzzer utility. This works by building an
executable with a "LLVMFuzzerTestOneInput()" entry point (which is
external to Zephyr, running in the host process environment!), which
it drives out of its own main() routine. The toolchain API is exposed
as just another sanitizer variant, which is clean.
Signed-off-by: Andy Ross <andyross@google.com>
GCC 12 performs bounds checking on the pointer arguments specified like
an array (e.g. `int arg[]`) and treats such arguments with an empty
length as having the length of 0, resulting in the compiler printing
out `stringop-overread' warning when they are accessed.
This commit corrects any pointer arguments declared using the array
expression to use the pointer expression instead.
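The shape of the change (function and parameter names are illustrative):
```c
#include <stdint.h>

/* Before: GCC 12 treats the empty array length as 0 and reports
 * stringop-overread when the body reads through the argument.
 */
void copy_regs_before(uint32_t regs[]);

/* After: equivalent pointer expression, no bogus bounds assumption. */
void copy_regs_after(uint32_t *regs);
```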
Signed-off-by: Stephanos Ioannidis <root@stephanos.io>
This implements support for relocating code to chosen memory regions via
the `zephyr_code_relocate` CMake function for RISC-V SoCs. ARM-specific
assumptions that were made by gen_relocate_app.py need to be corrected,
in particular not assuming any particular name for the default RAM
section (which is 'SRAM' for most ARM platforms) and not assuming 32-bit
pointers (so the test works on RV64).
Signed-off-by: Peter Marheine <pmarheine@chromium.org>
Support for CODE_DATA_RELOCATION is not inherently limited to ARM, so
move the Kconfig definition to top-level so it can be used by other
architectures. Since support is opt-in (requiring linker script
support), add a helper symbol enabled by architecture config that gates
whether CODE_DATA_RELOCATION is available instead of listing all
supported systems inline.
Signed-off-by: Peter Marheine <pmarheine@chromium.org>
When a cache API function is called from userspace, this results on
ARM64 in an OOPS (bad syscall error). This is due to at least two
different factors:
- the location of the cache handlers is preventing the linker from
actually finding the handlers
- specifically for ARM64 and ARC some cache handling functions are not
implemented (when userspace is not used the compiler simply optimizes
out these calls)
Fix the problem by:
- moving the userspace cache handlers to their logical and proper
location (in the drivers directory)
- adding the missing handlers for ARM64 and ARC
Signed-off-by: Carlo Caione <ccaione@baylibre.com>
Some platforms have the possibility to cancel the powering off until the
very last moment (for example if an IRQ is received). Deal with this
kind of failure.
Signed-off-by: Carlo Caione <ccaione@baylibre.com>
In the case of ARCv3 64-bit we have only one 64-bit accumulator
register instead of a register pair, so fix up the register
save & restore code.
While we are at it, also make the ARC_HAS_ACCL_REGS option (which
controls accumulator reg/regs save & restore) default
for HS5x and HS6x as well - as it should be.
Signed-off-by: Eugeniy Paltsev <Eugeniy.Paltsev@synopsys.com>
Signed-off-by: Evgeniy Paltsev <PaltsevEvgeniy@gmail.com>
It is frequent to find variable definitions like this:
```c
static const struct device *dev = DEVICE_DT_GET(...)
```
That is, module-level variables that are statically initialized with a
device reference. Such a value is, in most cases, never changed, meaning
the variable can also be declared as const (immutable). This patch
constifies all such cases.
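After constification, the example above becomes:
```c
static const struct device *const dev = DEVICE_DT_GET(...)
```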
Signed-off-by: Gerard Marull-Paretas <gerard.marull@nordicsemi.no>
Wire this up the same way ASAN works. Right now it's supported only by
recent clang versions (not gcc), and only in 64 bit mode. But it's
capable of detecting uninitialized data reads, which ASAN is not.
This support is wired into the sys_heap (and thus k_heap/k_malloc)
layers, allowing detection of heap misuse like use-after-free. Note
that there is one false negative lurking: due to complexity, in the
case where a sys_heap_realloc() call is able to shrink memory in
place, the now-unused suffix is not marked uninitialized immediately,
making it impossible to detect use-after-free of those particular
bytes. But the system will recover cleanly the next time the memory
gets allocated.
Also no attempt was made to integrate this handling into the newlib or
picolibc allocators, though that should hopefully be possible via
similar means.
Signed-off-by: Andy Ross <andyross@google.com>
This had bitrotten a bit, and didn't build as shipped. Current
libasan implementations want -fsanitize=address passed as a linker
argument too. We have grown a "lld" linker variant that needs the
same cmake treatment as the "ld" binutils one, but never got it. But
the various flags had been cut/pasted around to different places, with
slightly different forms. That's really sort of a mess, as sanitizer
support was only ever supported with host toolchains for native_posix
(and AFAICT no one anywhere has made this work on cross compilers in
an embedded environment). And the separate "gcc" vs. "llvm" layers
were silly, as there has only ever been one API for this feature (from
LLVM, then picked up compatibly by gcc).
Pull this stuff out and just do it in one place in the posix arch for
simplicity.
Also recent sanitizers are trying to add instrumentation padding
around data that we use linker trickery to pack tightly
(c.f. SYS_INIT, STRUCT_SECTION_ITERABLE) and we need a way
("__noasan") to turn that off. Actually for gcc, it was enough to
just make the records const (already true for most of them, except a
native_posix init struct), but clang apparently isn't smart enough.
Finally, add an ASAN_RECOVER kconfig that enables the use of
"halt_on_error=0" in $ASAN_OPTIONS, which continues execution past the
first error.
Signed-off-by: Andy Ross <andyross@google.com>
Changing $(ARCH_DIR)/common/Kconfig to arch/common/Kconfig.
The use of ARCH_DIR at this place is wrong, as it suddenly requires out
of tree archs to support a common/Kconfig file, which may make no sense
to them.
If an out of tree arch wants to place common Kconfig code in a common
Kconfig file, that's their choice and they should source such file
themselves.
Instead just source the Zephyr arch common file directly.
Signed-off-by: Torsten Rasmussen <Torsten.Rasmussen@nordicsemi.no>
We have now:
- CPU_HAS_{D,I}CACHE: when the CPU has support for d-cache and i-cache
- {D,I}CACHE: to enable / disable d-cache and i-cache
Signed-off-by: Carlo Caione <ccaione@baylibre.com>
A Cortex-M BusFault often arises from the execution of a function
pointer that got corrupted.
The Zephyr Cortex-M fault handler de-references the `$pc` in
`z_arm_is_synchronous_svc()` to determine if the fault was due to a
kernel oops (ARCH_EXCEPT). This can cause a BusFault if the pc itself
was corrupt. A BusFault from a HardFault will trigger ARM Cortex-M
"Lockup" preventing the Zephyr fault handler from running to
completion. This in turn, results in no fault handling information
getting dumped by the Zephyr fault handler.
To fix the issue, we can simply set the `CCR.BFHFNMIGN` bit prior to
the instruction address dereference which will cause the processor to
ignore the BusFault and return a value of 0x0 instead of entering
lockup. After the operation is complete, we clear `CCR.BFHFNMIGN` as
it would be unexpected for any other code in the fault handler to
trigger a fault.
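A sketch of the guarded dereference, using the standard CMSIS SCB
definitions (the function, the fault_pc argument and the surrounding
handler code are simplified stand-ins, not the actual fault handler):
```c
#include <stdint.h>

/* Assumes the CMSIS core header (providing SCB, SCB_CCR_BFHFNMIGN_Msk and
 * __DSB()) is already included, as it is in the Cortex-M fault handler.
 */
static uint16_t probe_instruction(uint32_t fault_pc)
{
	uint16_t instr;

	/* Ignore BusFaults raised while probing the (possibly bogus) pc. */
	SCB->CCR |= SCB_CCR_BFHFNMIGN_Msk;
	__DSB();

	instr = *(volatile uint16_t *)fault_pc;  /* reads as 0x0 on a fault */

	__DSB();
	SCB->CCR &= ~SCB_CCR_BFHFNMIGN_Msk;

	return instr;
}
```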
The issue can be reproduced programmatically with:
```
void (*unaligned_func)(void) = (void (*)(void))0x50000001;
unaligned_func();
```
I bumped into this problem while debugging an issue on the nRF9160DK
(`west build --board nrf9160dk_nrf9160ns`) and confirmed that after
making this change I now see the full fault handler print:
```
[00:00:45.582,214] <err> os: Exception occurred in Secure State
[00:00:45.582,244] <err> os: ***** HARD FAULT *****
[...]
[00:00:45.583,984] <err> os: Current thread: 0x2000d340 (shell_uart)
[00:00:45.829,498] <err> fatal_error: Resetting system
```
Signed-off-by: Chris Coleman <chris@memfault.com>
Allow enabling FPU with TF-M with the following limitations:
- Only IPC mode is supported by TF-M.
- Disallow FPU hard ABI when building the NS application, the TF-M build
system does not pass the flags correctly to all dependencies.
Signed-off-by: Joakim Andersson <joakim.andersson@nordicsemi.no>
QEMU requires that the semihosting trap instruction sequence, which
consists of three uncompressed instructions, lie in the same page, and
refuses to interpret the trap sequence if these instructions are placed
across two different pages.
This commit adds 16-byte alignment requirement to the `semihost_exec`
function, which occupies 12 bytes, to ensure that the three trap
sequence instructions in this function are never placed across two
different pages.
Signed-off-by: Stephanos Ioannidis <root@stephanos.io>
All SOC_ERET definitions expand to the mret instruction (used to return
from a trap: exception or interrupt). The 'eret' instruction existed
in previous RISC-V privileged specs, but it doesn't seem to be used in
Zephyr (ref. RISC-V Privileged Architectures 3.2.2).
Signed-off-by: Gerard Marull-Paretas <gerard.marull@nordicsemi.no>
Some processors support a Dual-redundant Core Lock-step
(DCLS) topology, but the processor can still be run in
split-lock mode (by default or changed at flash time).
So, introduce a DCLS config that is enabled by default if
CPU_HAS_DCLS is set; it should be disabled if the
processor is used in split-lock mode.
Signed-off-by: Dat Nguyen Duy <dat.nguyenduy@nxp.com>
ICI (Inter-Core Interrupt Unit) interrupts and priorities were hardcoded
in C files. This patch moves this information to Devicetree and updates
code to make use of it.
Signed-off-by: Gerard Marull-Paretas <gerard.marull@nordicsemi.no>
Transfer the entry point and initial parameters in the callee_saved
struct rather than on the stack. This saves 48 byte stack per thread
and simplifies the logic.
Signed-off-by: Julius Barendt <julius.barendt@gaisler.com>
Execute data and instruction sync barriers after writing to SCTLR
to disable the MPU, to ensure the registers are set before
proceeding and that the new changes are seen by the instructions
that follow.
Signed-off-by: Manuel Arguelles <manuel.arguelles@nxp.com>
Execute data and instruction sync barriers after writing to SCTLR
to enable the MPU, to ensure the registers are set before
proceeding and that the new changes are seen by the instructions
that follow.
Signed-off-by: Manuel Arguelles <manuel.arguelles@nxp.com>
The simulator seems to drop garbage addresses (somewhere in the ROM it
looks like) into this SR at arbitrary times. I don't know if this is
a hardware exception handler that we can't turn off, or a simulator
bug, or what. But our code that assumes it will be cleared to zero or
valid is breaking. Set it every time in every context switch for now
pending someone figuring out what's going wrong.
Signed-off-by: Andy Ross <andrew.j.ross@intel.com>
Signed-off-by: Anas Nashif <anas.nashif@intel.com>
When compiling OpenAMP with Zephyr Cache Management, undefined references
are listed for all functions called within the cache management code.
Signed-off-by: Ryan McClelland <ryanmcclelland@fb.com>
Move those defines and values back to headers. Kconfig is not a good
place for this, later this should move to DTS.
Signed-off-by: Anas Nashif <anas.nashif@intel.com>
MISRA C:2012 Rule 14.4 (The controlling expression of an if statement
and the controlling expression of an iteration-statement shall have
essentially Boolean type.)
Use `do { ... } while (false)' instead of `do { ... } while (0)'.
Use comparisons with zero instead of implicitly testing integers.
Use comparisons with NULL instead of implicitly testing pointers.
Use comparisons with NUL instead of implicitly testing plain chars.
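For example, the transformations look like this (illustrative functions,
not code from the change):
```c
#include <stdbool.h>
#include <stddef.h>

void before(int count, const char *p, char c)
{
	if (count) { /* ... */ }       /* implicit integer test    */
	if (p) { /* ... */ }           /* implicit pointer test    */
	if (c) { /* ... */ }           /* implicit plain-char test */
	do { /* ... */ } while (0);
}

void after(int count, const char *p, char c)
{
	if (count != 0) { /* ... */ }
	if (p != NULL) { /* ... */ }
	if (c != '\0') { /* ... */ }
	do { /* ... */ } while (false);
}
```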
This commit is a subset of the original auditable-branch commit:
5d02614e34a86b549c7707d3d9f0984bc3a5f22a
Signed-off-by: Simon Hein <SHein@baumer.com>
In the interrupt handler code we don't save the full current task context
on the stack (we don't save callee registers) before the
z_get_next_switch_handle() call, but we pass _current to it, so
z_get_next_switch_handle() saves the current task to switch_handle,
which means that this CPU's current task can be picked up by another CPU
before we have fully stored its context on this CPU.
Signed-off-by: Evgeniy Paltsev <PaltsevEvgeniy@gmail.com>
Signed-off-by: Eugeniy Paltsev <Eugeniy.Paltsev@synopsys.com>
Any project with the Kconfig option CONFIG_LEGACY_INCLUDE_PATH set to n
couldn't be built because some files were missing the zephyr/ prefix in
includes.
Re-run the migrate_includes.py script to fix all legacy include paths.
Signed-off-by: Tomislav Milkovic <milkovic@byte-lab.com>
The use of spsr_hyp is "UNPREDICTABLE" for the ARM Cortex-R52.
Some implementations choose to implement the behavior, but it
should not be assumed.
Fixes #47330
Signed-off-by: Tobias Röhmel <tobias.roehmel@rwth-aachen.de>
We can use definitions provided by "standard CMSIS" to access
MEMFAULT/BUSFAULT/USGFAULT fields in CFSR.
Signed-off-by: Gerard Marull-Paretas <gerard.marull@nordicsemi.no>
irq_lock() returns an unsigned integer key.
Generated by spatch using semantic patch
scripts/coccinelle/irq_lock.cocci
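The resulting pattern in callers:
```c
#include <zephyr/irq.h>

void critical_update(void)
{
	unsigned int key = irq_lock();   /* not int: the key is unsigned */

	/* ...critical section... */

	irq_unlock(key);
}
```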
Signed-off-by: Johann Fischer <johann.fischer@nordicsemi.no>
Move scripts needed by the build system and not designed to be run
individually or standalone into the build subfolder.
Signed-off-by: Anas Nashif <anas.nashif@intel.com>
Quoting from the SiFive Interrupt Cookbook [0]
CLIC vectored mode has a similar concept to CLINT vectored mode, where
an interrupt vector table is used for specific interrupts. However, in
CLIC vectored mode, the handler table contains the address of the
interrupt handler instead of an opcode containing a jump instruction.
When an interrupt occurs in CLIC vectored mode, the address of the
handler entry from the vector table is loaded and then jumped to in
hardware
So, when CLIC is present we must use IRQ_VECTOR_TABLE_JUMP_BY_ADDRESS
instead of IRQ_VECTOR_TABLE_JUMP_BY_CODE.
[0] https://starfivetech.com/uploads/sifive-interrupt-cookbook-v1p2.pdf
Signed-off-by: Carlo Caione <ccaione@baylibre.com>
This commit adds icache and dcache maintenance functions
for aarch32.
Signed-off-by: Jamie Iles <quic_jiles@quicinc.com>
Signed-off-by: Dave Aldridge <quic_daldridg@quicinc.com>
Add a new API used by arch to implement suspend-to-RAM (S2RAM).
The API is composed of a single function to save the CPU context on
suspend.
A CPU context is the arch-specific set of registers that must be
preserved on power-off (in retained RAM) to be able to resume the
execution from the point it was suspended without going through the
whole kernel startup stage.
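The assumed shape of that single function (the name and signature below are
an assumption for illustration, not quoted from this change):
```c
/* Save the CPU context to retained RAM, then call system_off() to power the
 * system down; on resume, execution continues as if returning from this call.
 */
int arch_pm_s2ram_suspend(void (*system_off)(void));
```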
Signed-off-by: Carlo Caione <ccaione@baylibre.com>
Buffer size must be decreased by one when non-zero to calculate the
right end address, and this must be checked for overflows.
Variables for the region limit are renamed for clarity, since they may
otherwise be understood as the raw register values.
Signed-off-by: Manuel Arguelles <manuel.arguelles@nxp.com>
ARMv8-R aarch32 processor has support for
ARM PMSAv8-32. To add support for ARMv8-R we reuse the
ARMv8-M effort and change access to the different registers
such as rbar, rlar, mair, prselr.
Signed-off-by: Julien Massot <julien.massot@iot.bzh>
Signed-off-by: Manuel Arguelles <manuel.arguelles@nxp.com>
When CONFIG_INIT_STACKS is enabled all stacks should be filled with 0xaa
so that the thread analyzer can measure stack utilization, but the IRQ
stack was not filled and so `kernel stacks` on the shell would show that
the stack had been fully used, implying an IRQ stack overflow
regardless of the IRQ stack size.
Fill the IRQ stack before it gets used so that we can have precise usage
reports.
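Conceptually the fix amounts to something like the following, run once per
CPU before interrupts are enabled (a sketch; the names below are
placeholders, not the actual symbols touched by this change):
```c
#include <string.h>

/* irq_stack_start / irq_stack_size are placeholders for the per-CPU IRQ
 * stack buffer and its usable size.
 */
static void irq_stack_prefill(void *irq_stack_start, size_t irq_stack_size)
{
	/* Pre-fill the IRQ stack with the 0xAA pattern so the thread analyzer
	 * can later tell how much of it was really used.
	 */
	(void)memset(irq_stack_start, 0xaa, irq_stack_size);
}
```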
Signed-off-by: Jamie Iles <quic_jiles@quicinc.com>
Signed-off-by: Dave Aldridge <quic_daldridg@quicinc.com>
There is no reason to have this script in a different place than all the
other python scripts. Move it.
Signed-off-by: Carlo Caione <ccaione@baylibre.com>
For vectored interrupts use the generated IRQ vector table instead of
relying on a custom-generated table.
Signed-off-by: Carlo Caione <ccaione@baylibre.com>
The whole mechanism of IRQ table generation is build around the
assumption that the IRQ vector table contains an array of addresses the
PC will be assigned to when the corresponding interrupt is triggered.
While this is correct for the majority of architectures (ARM, RISCV with
CLIC in vectored mode, etc...) this is not valid in general (for example
RISCV with CLINT/HLINT in vectored mode).
In this alternative format for the IRQ vector table, the pc will get
assigned by the hardware to the address of the vector table index
corresponding to the interrupt ID. From the vector table index, a
subsequent jump will occur from there to service the interrupt.
This means that the IRQ vector table contains an opcode that is a jump
instruction to a specific location instead of the address of the
location itself.
This patch is introducing support for this alternative IRQ vector table
format. The user can now select one format or the other one by acting on
IRQ_VECTOR_TABLE_JUMP_BY_ADDRESS or IRQ_VECTOR_TABLE_JUMP_BY_CODE
Kconfig symbols.
Signed-off-by: Carlo Caione <ccaione@baylibre.com>
Removes the ability to enable the FPU with TF-M -- added in
PR #45906, and which is causing CI failures -- until a more
robust solution can be implemented for FPU support w/TF-M.
Signed-off-by: Kevin Townsend <kevin.townsend@linaro.org>
Following zephyr's style guideline, all if statements, including single
line statements shall have braces.
Signed-off-by: Anas Nashif <anas.nashif@intel.com>
Following zephyr's style guideline, all if statements, including single
line statements shall have braces.
Signed-off-by: Anas Nashif <anas.nashif@intel.com>
The EFI console output call returns with interrupts enabled; this is a
firmware bug. A previous workaround disabled interrupts right after the
call returned, but in some cases an interrupt can still happen within
the EFI call context. If such an interrupt is handled, calling printk
again will re-enter the EFI call, or a context swap might happen.
This is the suggested solution applied for EFI console output:
1. Skip the printk call when it is called in interrupt context.
2. Disable scheduling during the EFI call window.
Signed-off-by: Enjia Mai <enjia.mai@intel.com>
Add a minimal EFI console driver to support printf. This console driver
only supports console output; otherwise printf will not work.
Signed-off-by: Enjia Mai <enjia.mai@intel.com>
Some early RISC-V SoCs have a problem when an `mret` instruction is used
outside a trap handler.
After the latest Zephyr RISC-V huge rework, the arch_switch code is
indeed calling `mret` when not in handler mode, breaking some early
RISC-V platforms.
Optionally restore the old behavior by adding a new
CONFIG_RISCV_ALWAYS_SWITCH_THROUGH_ECALL symbol.
Signed-off-by: Nicolas Pitre <npitre@baylibre.com>
Signed-off-by: Carlo Caione <ccaione@baylibre.com>
Targets with text or data addresses above the 4GB boundary may need to use
the large code model to ensure relocations in the linker work correctly.
Signed-off-by: Keith Packard <keithp@keithp.com>
This is really useful only for one case i.e. when testing against zero.
Do that test inline where it is needed and make the rest of the code
independent from the actual numerical value being tested to make code
maintenance easier if/when new cases are added.
Signed-off-by: Nicolas Pitre <npitre@baylibre.com>
MISRA C:2012 Rule 21.13 (Any value passed to a function in <ctype.h>
shall be representable as an unsigned char or be the value EOF).
Functions in <ctype.h> have undefined behavior if they are called with
any other value. Callers affected by this change are not prepared to
handle EOF anyway. The addition of these casts avoids the issue
and does not result in any performance penalty.
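For example:
```c
#include <ctype.h>
#include <stdbool.h>

static bool starts_with_digit(const char *s)
{
	/* The cast keeps the argument representable as an unsigned char. */
	return isdigit((unsigned char)s[0]) != 0;
}
```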
Signed-off-by: Abramo Bagnara <abramo.bagnara@bugseng.com>
Signed-off-by: Simon Hein <SHein@baumer.com>
Allow the application to enable the FPU when TF-M has been enabled.
Pass the correct compilation flags according to the TF-M integration
guide.
Signed-off-by: Joakim Andersson <joakim.andersson@nordicsemi.no>
Enable single-threaded support for the arm64 architecture.
This mode of execution is supported on an SoC under
development and is validated regularly.
Signed-off-by: Eugene Cohen <quic_egmc@quicinc.com>
In performing a double check of the Zephyr arm64 MMU config
against edk2, a difference in the programming of the
Translation Control Register (TCR) was found. TCR.TG[1]
should be set to address Cortex-A57 erratum 822227:
"Using unsupported 16K translation granules might cause
Cortex-A57 to incorrectly trigger a domain fault"
Signed-off-by: Eugene Cohen <quic_egmc@quicinc.com>
By default ARCH_IRQ_VECTOR_TABLE_ALIGN and ARCH_SW_ISR_TABLE_ALIGN are
set to 0. Use a more proper value.
Signed-off-by: Carlo Caione <ccaione@baylibre.com>
The generation of the software ISR table and the IRQ vector table
(respectively generated by CONFIG_GEN_SW_ISR_TABLE and
CONFIG_GEN_IRQ_VECTOR_TABLE) should (in theory) go through three stages:
1. A placeholder table is generated in arch/common/isr_tables.c and
placed in an orphaned .gnu.linkonce.{irq_vector_table, sw_isr_table}
section
2. The real table is generated by arch/common/gen_isr_tables.py (creating
the build/zephyr/isr_tables.c file)
3. The real table is un-orphaned by moving it in a proper section with a
proper alignment
While all the steps are done automatically for the software ISR table,
for the IRQ vector table each architecture must take care of modifying
its own linker script to place the generated IRQ vector table somewhere
(basically step 3 is missing).
This is currently only done for 2 architectures: Cortex-M (ARMv7) and
ARC. But when another architecture tries to use the IRQ vector table,
the linker complains about that. For example:
Linking C executable zephyr/zephyr.elf
riscv64-zephyr-elf/bin/ld.bfd: warning: orphan section
`.gnu.linkonce.irq_vector_table' from
`zephyr/CMakeFiles/zephyr_final.dir/isr_tables.c.obj' being placed in
section `.gnu.linkonce.irq_vector_table'
In this patch we introduce a new CONFIG_ARCH_IRQ_VECTOR_TABLE_ALIGN to
support the architectures requiring a special alignment for the IRQ
vector table and we also introduce a way to automatically place the IRQ
vector table in place in the same way it is done for the ISR software
table.
Signed-off-by: Carlo Caione <ccaione@baylibre.com>
On platforms where reset vector catch is not possible
it is useful to have a compile-time option to spin
at the reset vector allowing a debugger to be attached
and then to manually resume execution.
Define a config option for arm64 to spin at the
reset vector so a debugger can be attached.
Signed-off-by: Eugene Cohen <quic_egmc@quicinc.com>
Under no circumstances can or should the generated IRQ vector table
contain NULL values. This is correctly enforced at generation time by
the gen_isr_tables.py script, making the existence of the ISR_WRAPPER
define useless.
The enforced behaviour is:
- When the ISR software table exists defaults to _isr_wrapper
- Otherwise defaults to z_irq_spurious
Signed-off-by: Carlo Caione <ccaione@baylibre.com>
Retrieve the pmpaddr value matching the last global PMP slot and add it
to the per-thread m-mode and u-mode entry array. Even if that value is
not written out again on thread context switch, that value can still be
used by set_pmp_entry() to attempt a single-slot TOR mapping with it.
Nicely abstract this with the new z_riscv_pmp_thread_init() where the
PMP_M_MODE(thread) and PMP_U_MODE(thread) argument generators can be
used.
Signed-off-by: Nicolas Pitre <npitre@baylibre.com>
Adds compatibility with Intel ADSP GDB from Zephyr SDK and
from Cadence toolchain to coredump_gdbserver.py.
Adds CAVS 15-25 (APL) register definitions. Implements
handle_register_single_read_packet to serve ADSP GDB
p packets.
Prevents BSA from changing between stack dump printout
and coredump by taking a lock. Observed to be necessary for
accurate results on slower simulated platforms.
Signed-off-by: Lauren Murphy <lauren.murphy@intel.com>
Triggers CPU exception with illegal instruction when z_except_reason
is called (e.g. in k_panic, k_oops). Creates exception stack frame
for use by coredump. Adds unique cause code for ARCH_EXCEPT. Disables
test case failure for qemu_xtensa.
Without an ARCH_EXCEPT implementation, z_except_reason calls
z_fatal_error directly with a null ESF and bypasses
xtensa_excint1_c's error logging. An ESF is required to coredump.
Signed-off-by: Lauren Murphy <lauren.murphy@intel.com>
A QEMU bug may create bad transient PMP representations causing
false access faults to be reported. Work around it by setting
pmp registers to zero from the update start point to the end
before updating them with new values.
The QEMU fix is here with more details about this bug:
https://lists.gnu.org/archive/html/qemu-devel/2022-06/msg02800.html
Signed-off-by: Nicolas Pitre <npitre@baylibre.com>
This reverts the bulk of commit c8bfc2afda ("riscv: make
arch_is_user_context() SMP compatible") and replaces it with a flag
stored in the thread local storage (TLS) area, therefore making TLS
mandatory for userspace support on RISC-V.
This has many advantages:
- The tp (x4) register is already dedicated by the standard for this
purpose, making TLS support almost free.
- This is very efficient, requiring only a single instruction to clear
and 2 instructions to set.
- This makes the SMP case much more efficient. No need for funky
exception code any longer.
- SMP and non-SMP now use the same implementation making maintenance
easier.
- The is_user_mode variable no longer requires a dedicated PMP mapping
and therefore freeing one PMP slot for other purposes.
Signed-off-by: Nicolas Pitre <npitre@baylibre.com>
5f65dbcc9dab3d39473b05397e05.
The tp (x4) register is neither caller nor callee saved according to
the RISC-V standard calling convention. It only has to be set on thread
context switching and is otherwise read-only.
To protect the kernel against a possible rogue user thread, the tp is
also re-set on exception entry from u-mode.
Signed-off-by: Nicolas Pitre <npitre@baylibre.com>
Logging v1 has been removed and log_strdup wrapper function is no
longer needed. Removing the function and its use in the tree.
Signed-off-by: Krzysztof Chruscinski <krzysztof.chruscinski@nordicsemi.no>
MISRA C:2012 Rule 8.2 (Function types shall be in prototype form with
named parameters.)
Added missing parameter names.
Signed-off-by: Abramo Bagnara <abramo.bagnara@bugseng.com>
For some reason RISCV is the only arch where the vector table entry is
called __irq_wrapper instead of _isr_wrapper. This is not only a
cosmetic change: Zephyr expects the common ISR handler to be called
_isr_wrapper (for example when generating the IRQ vector table).
Change it.
find ./ -type f -exec sed -i 's/__irq_wrapper/_isr_wrapper/g' {} \;
Signed-off-by: Carlo Caione <ccaione@baylibre.com>
This commit updates all deprecated `K_KERNEL_STACK_ARRAY_EXTERN` macro
usages to use the `K_KERNEL_STACK_ARRAY_DECLARE` macro instead.
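For example (illustrative symbol name and sizes):
```c
#include <zephyr/kernel.h>

/* Deprecated form */
K_KERNEL_STACK_ARRAY_EXTERN(my_stacks, 2, 1024);

/* Replacement */
K_KERNEL_STACK_ARRAY_DECLARE(my_stacks, 2, 1024);
```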
Signed-off-by: Stephanos Ioannidis <root@stephanos.io>
This commit corrects all `extern K_THREAD_STACK_DEFINE` macro usages
to use the `K_THREAD_STACK_DECLARE` macro instead.
Signed-off-by: Stephanos Ioannidis <root@stephanos.io>
This commit corrects all `extern K_KERNEL_STACK_ARRAY_DEFINE` macro
usages to use the `K_KERNEL_STACK_ARRAY_DECLARE` macro instead.
Signed-off-by: Stephanos Ioannidis <root@stephanos.io>
This commit selectively disables the infinite recursion warning
(`-Winfinite-recursion`), which may be reported by GCC 12 and above,
for the `disable_table` function because no actual infinite recursion
will occur under normal circumstances.
Signed-off-by: Stephanos Ioannidis <root@stephanos.io>
arch_mem_map() on ARM64 is currently not supporting the K_MEM_PERM_USER
parameter so we cannot allocate userspace accessible memory using the
memory helpers. Fix this.
Signed-off-by: Carlo Caione <ccaione@baylibre.com>