Commit graph

4947 commits

Author SHA1 Message Date
Nicolas Pitre abb50e1605 arch/arm64/mmu: fix table discarding code
First, we have commit 7d27bd0b85 ("arch: arm64: Disable infinite
recursion warning for `discard_table`") that blindly shut up a compiler
warning that actually highlighted a real bug. Revert that and fix
the bug properly. And yes, mea culpa for having been the first to
approve that commit, or even creating the bug in the first place.

Then let's add proper table usage count handling for discard_table() to
work properly and avoid leaking table pages.
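
As a rough illustration of what "table usage count handling" means
here, a minimal sketch (with hypothetical helper names, not the actual
arch/arm64/mmu code) could look like this:

    /* Illustrative only: recursively discard a translation table,
     * dropping each child table's use count and freeing the page only
     * once nothing references it anymore.
     */
    static void discard_table(uint64_t *table, unsigned int level)
    {
    	for (unsigned int i = 0; i < ENTRIES_PER_TABLE; i++) {
    		if (is_table_entry(table[i], level)) {
    			uint64_t *child = table_from_entry(table[i]);

    			/* Recurse into the child table one level down,
    			 * never into the table itself.
    			 */
    			discard_table(child, level + 1);
    			if (dec_table_usage(child) == 0) {
    				free_table_page(child); /* no leak */
    			}
    		}
    		table[i] = 0U;
    	}
    }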

Signed-off-by: Nicolas Pitre <npitre@baylibre.com>
2023-03-07 08:33:05 +01:00
Jonas Otto 60b8773491 arch: riscv: enable flash config
For the RISCV arch, enable the FLASH_SIZE and FLASH_BASE_ADDRESS configs.
To avoid duplicated work, remove the flash configs from the RISCV SoCs.

Signed-off-by: Jonas Otto <jonas@jonasotto.com>
2023-02-28 10:29:03 +01:00
Ayan Kumar Halder 958dcf98e8 arch: arm: aarch32: Add ability to generate zImage header
The image header is compatible for zImage(32) protocol.

Offset  Value          Description
0x24    0x016F2818     Magic number to identify ARM Linux zImage
0x28    start address  The address the zImage starts at
0x2C    end address    The address the zImage ends at

As Zephyr can be built with a fixed load address, Xen/Uboot can read
the image header and decide where to copy the Zephyr image.

Also, it is to be noted that for AArch32 A/R, the vector table should
be aligned to a 0x20 boundary. Refer to ARM DDI 0487I.a ID081822,
G8-9815, G8.2.168, VBAR, Vector Base Address Register:
Bits [4:0] = RES0.
For AArch32 M (refer to DDI0553B.v ID16122022, D1.2.269, VTOR, Vector
Table Offset Register), Bits [6:0] = RES0.
As zImage header occupies 0x30 bytes, thus it is necessary to align
the vector table base address to 0x80 (which satisfies both VBAR and
VTOR requirements).

Also, it is to be noted that not all AArch32 M-class processors have
VTOR, thus the ARM_ZIMAGE_HEADER header depends on
CPU_AARCH32_CORTEX_R || CPU_AARCH32_CORTEX_A || CPU_CORTEX_M_HAS_VTOR.
The reason is that processors which do not have VBAR or VTOR need to
have the exception vector table at a fixed address at the beginning of
ROM (refer to the comment in arch/arm/core/aarch32/cortex_m/CMakeLists.txt);
they cannot support any headers.

Also, the first instruction in the zImage header is a branch to the
kernel start address. This is to support booting in situations where
the zImage header need not be parsed.

In case of Arm v8M, the first two entries in the vector table should be
"Initial value for the main stack pointer on reset" and "Start address
for the reset handler" (refer to Armv8-M DDI0553B.v ID16122022, B3.30,
Vector tables).
In case of Armv7M (ARM DDI 0403E. ID021621, B1.5.3 The vector table),
the first entry is "SP_main. This is the reset value of the Main stack
pointer.".
Thus when v7M or v8M starts from reset, it expects to see these values
at the default reset vector location.
See the following text from Armv7M (ARM DDI 0403E. ID021621, B1-526)
"On powerup or reset, the processor uses the entry at offset 0 as the
initial value for SP_main..."
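
For reference, the 0x30-byte header described above can be pictured as
the following packed layout (an illustrative sketch based on the
offsets listed in this message, not the actual generated code):

    #include <stdint.h>

    /* Schematic view of the zImage(32) header placed at the very
     * start of the image.
     */
    struct arm_zimage_header {
    	uint32_t code[9]; /* 0x00: executable words; the first one
    			   * branches to the kernel start address so
    			   * the header may be skipped when unparsed
    			   */
    	uint32_t magic;   /* 0x24: 0x016F2818, ARM Linux zImage magic */
    	uint32_t start;   /* 0x28: address the zImage starts at */
    	uint32_t end;     /* 0x2C: address the zImage ends at */
    } __attribute__((packed)); /* total size: 0x30 bytes */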

Signed-off-by: Ayan Kumar Halder <ayan.kumar.halder@amd.com>
2023-02-27 17:34:12 +01:00
George Ruinelli b2512d2f53 arm: Add missing include
Add missing include to prevent `'EINVAL' undeclared` when
using `CONFIG_NULL_POINTER_EXCEPTION_DETECTION_DWT=y`

Signed-off-by: George Ruinelli <caco3@ruinelli.ch>
2023-02-25 07:59:56 -05:00
Peter Mitsis 2ab7286c71 arch: riscv: Remove unused offset symbols
Removes unused offset symbols under the RISCV architecture.

Signed-off-by: Peter Mitsis <peter.mitsis@intel.com>
2023-02-23 16:44:07 +01:00
Peter Mitsis a9e5038c2b arch: x86: Remove unused offset symbols
Removes unused offset symbols under the x86 architecture.

Signed-off-by: Peter Mitsis <peter.mitsis@intel.com>
2023-02-23 16:44:07 +01:00
Peter Mitsis 9b61418427 arch: sparc: Remove unused offset symbols
Removes unused offset symbols under the SPARC architecture.

Signed-off-by: Peter Mitsis <peter.mitsis@intel.com>
2023-02-23 16:44:07 +01:00
Peter Mitsis 66af4f443d arch: posix: Remove unused offset symbols
Removes unused offset symbols under the POSIX architecture.

Signed-off-by: Peter Mitsis <peter.mitsis@intel.com>
2023-02-23 16:44:07 +01:00
Peter Mitsis 9d83993db0 arch: arm: Remove unused generated offset symbols
Removes unused generated offset symbols under the ARM architecture.

Signed-off-by: Peter Mitsis <peter.mitsis@intel.com>
2023-02-23 16:44:07 +01:00
Kumar Gala 434ca63e2f arch: arm: limit FP16 support to Cortex-A or Cortex-R
FP16 isn't supported on Cortex-M, so limit the Kconfig feature to
Cortex-A or Cortex-R.

Signed-off-by: Kumar Gala <kumar.gala@intel.com>
2023-02-23 08:48:01 +01:00
Daniel Leung 44628735b8 linker: rom_start_offset: add to address instead of set
The CONFIG_ROM_START_OFFSET is supposed to be added to the current
address when linking, instead of having the current address set to it.
So fix that.

Not sure why it worked up to this point, but llvm/clang/lld complained
that it could not move the location counter backward.

Signed-off-by: Daniel Leung <daniel.leung@intel.com>
2023-02-22 08:46:52 -05:00
Andrzej Głąbek 22b17e490b arch: arm: aarch32: Introduce z_arm_on_enter_cpu_idle() hook
Introduce an optional hook to be called when the CPU is made idle.
If needed, this hook can be used to prevent the CPU from actually
entering sleep by skipping the WFE/WFI instruction.
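
A minimal sketch of how such a hook could be used, assuming the hook
returns a boolean where false means "skip WFE/WFI this time" (see the
actual hook declaration for the exact contract):

    #include <stdbool.h>

    /* SoC/application-provided hook: return false to keep the CPU
     * awake instead of executing WFE/WFI.
     */
    bool z_arm_on_enter_cpu_idle(void)
    {
    	/* hypothetical condition, for illustration only */
    	if (soc_transfer_in_progress()) {
    		return false;	/* do not enter sleep right now */
    	}

    	return true;		/* proceed with WFE/WFI as usual */
    }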

Signed-off-by: Andrzej Głąbek <andrzej.glabek@nordicsemi.no>
2023-02-21 15:03:30 +01:00
Nicolas Pitre 3c440af975 riscv: pmp: provision for implementations with partial PMP support
Looks like some implementors decided not to implement the full set of
PMP range matching modes. Let's rearrange the code so that any of those
modes can be disabled.
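
For context, the range matching modes in question are the standard
pmpcfg "A" field encodings from the RISC-V privileged spec; an
implementation may support only a subset of them (e.g. NAPOT but not
TOR), which is what the rearranged code now accommodates:

    /* pmpcfg address-matching ("A") field values, per the RISC-V
     * privileged spec (illustrative names, not the in-tree defines).
     */
    enum pmp_addr_match_mode {
    	PMP_A_OFF   = 0, /* entry disabled */
    	PMP_A_TOR   = 1, /* top of range: [pmpaddr(i-1), pmpaddr(i)) */
    	PMP_A_NA4   = 2, /* naturally aligned 4-byte region */
    	PMP_A_NAPOT = 3, /* naturally aligned power-of-two region */
    };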

Signed-off-by: Nicolas Pitre <npitre@baylibre.com>
2023-02-20 10:57:11 +01:00
Nicolas Pitre ea34acb62c riscv: stack PMP: fix CONFIG_MPU_REQUIRES_POWER_OF_TWO_ALIGNMENT=y case
Let's honor CONFIG_MPU_REQUIRES_POWER_OF_TWO_ALIGNMENT even for kernel
stacks. This saves one global PMP slot when creating the guard area for
the IRQ stack, and some hw implementations might require that anyway.

With this change, arch_mem_domain_max_partitions_get() becomes much
more reliable and tests/kernel/mem_protect is more likely to pass even
with the stack guard enabled.
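
To see why a slot is saved: a power-of-two aligned guard can be
described with a single NAPOT pmpaddr/pmpcfg pair, whereas a TOR
region generally consumes two entries (bottom and top). A hedged
sketch of the NAPOT encoding (not the in-tree helper):

    /* Encode the pmpaddr value for a NAPOT region of `size` bytes at
     * `base`, where size is a power of two >= 8 and base is
     * size-aligned.
     */
    static unsigned long pmp_napot_addr(unsigned long base,
    				    unsigned long size)
    {
    	/* pmpaddr holds address >> 2; the trailing 0b01...1 pattern
    	 * encodes the region size.
    	 */
    	return (base >> 2) | ((size >> 3) - 1);
    }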

Signed-off-by: Nicolas Pitre <npitre@baylibre.com>
2023-02-20 10:57:11 +01:00
Aaron Massey 1ee96f14af arch: Double privileged stack space with emulation
Additional privileged stack space is used by peripheral emulators when
userspace is enabled. This is largely due to additional function calls and
data structures allocated on the stack. This can potentially lead to stack
smashing if the privileged stack size isn't high enough, causing an
exception.

Increase the privileged stack space when userspace and peripheral emulation
are enabled.

Signed-off-by: Aaron Massey <aaronmassey@google.com>
2023-02-19 20:38:38 -05:00
Carlo Caione 61a204c831 riscv: Do not remove ESF when SOC_ISR_SW_UNSTACKING
When CONFIG_SOC_ISR_SW_UNSTACKING is defined, it's up to the custom SoC
code to remove the ESF, because the software-managed part of the ESF
depends on the hardware. Fix this in the ISR code.

Signed-off-by: Carlo Caione <ccaione@baylibre.com>
2023-02-01 03:52:14 +09:00
Nicolas Pitre 10500f1b41 riscv: cope with MTVAL not updated on illegal instruction faults
Some implementations may not capture the faulting instruction in mtval
and set it to zero when an illegal instruction fault is raised. This is
notably the case with QEMU version 7.0.0 when a CSR instruction is
involved.
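
Conceptually, the workaround is for the trap handler to fall back to
fetching the opcode from the faulting PC when mtval reads as zero; a
rough sketch (assuming mtval and mepc were already read into locals,
and ignoring compressed instructions):

    unsigned long insn = mtval; /* normally the faulting instruction */

    if (insn == 0) {
    	/* The implementation didn't capture it (e.g. QEMU 7.0.0 with
    	 * CSR instructions): re-read it from the address in mepc.
    	 */
    	insn = *(volatile uint32_t *)mepc;
    }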

Signed-off-by: Nicolas Pitre <npitre@baylibre.com>
2023-01-30 23:47:36 +00:00
Nicolas Pitre 83f849e00e riscv: FPU trap: catch CSR access to fcsr, frm and fflags
The FRCSR, FSCSR, FRRM, FSRM, FSRMI, FRFLAGS, FSFLAGS and FSFLAGSI
instructions are in fact CSR instructions targeting the fcsr, frm and
fflags registers. They should be caught as FPU instructions as well.
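
Those pseudo-instructions all encode as SYSTEM-opcode CSR accesses
whose csr field selects one of the floating-point CSRs, so the trap
handler can classify them by CSR number; roughly (an illustrative
check, not the in-tree code):

    #include <stdbool.h>
    #include <stdint.h>

    #define CSR_FFLAGS 0x001 /* FP accrued exception flags */
    #define CSR_FRM    0x002 /* FP dynamic rounding mode */
    #define CSR_FCSR   0x003 /* FP control and status (frm + fflags) */

    static bool is_fp_csr_insn(uint32_t insn)
    {
    	uint32_t csr = insn >> 20; /* csr number is in bits [31:20] */

    	/* A complete check would also verify that funct3 encodes a
    	 * CSR access and not e.g. ECALL/EBREAK.
    	 */
    	return ((insn & 0x7f) == 0x73) && /* SYSTEM major opcode */
    	       (csr == CSR_FFLAGS || csr == CSR_FRM || csr == CSR_FCSR);
    }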

Signed-off-by: Nicolas Pitre <npitre@baylibre.com>
2023-01-30 23:47:36 +00:00
Nicolas Pitre b2ffee7fe2 riscv: FPU switching fixes
- IRQ state for the interrupted context corresponds to the PIE bit,
  not the IE bit.

- Restoring the saved FPU state should clear the entire field before
  OR'ing the wanted bits in.

Signed-off-by: Nicolas Pitre <npitre@baylibre.com>
2023-01-30 23:47:36 +00:00
Conor Paxton 804aa29f89 riscv: smp: use devicetree to map hartids to zephyr cpus
For RISC-V, the reg property of a cpu node in the devicetree describes
the low-level unique ID of each hart. Using devicetree macros, a list
of all cpus with status "okay" can be generated.

Using devicetree overlays, a hart or multiple harts can be marked as
"disabled", thus excluding them from the list. This allows platforms
that have non-zero indexed SMP capable harts to be functionally mapped
to Zephyr's sequential CPU numbering scheme.

On kernel init, if the application has MP_MAX_NUM_CPUS greater than 1,
generate the list of cpu nodes from the devicetree with status "okay"
and map the unique hartids to Zephyr CPUs.

While we are at it, as the hartid is the value that gets passed to
z_riscv_secondary_cpu_init, use that as the variable name instead of
cpu_num.

Signed-off-by: Conor Paxton <conor.paxton@microchip.com>
2023-01-30 23:45:35 +00:00
Conor Paxton 02391ed00d riscv: enable booting from non-zero indexed RISC-V hart
RISC-V multi-hart systems that employ a heterogeneous core complex are
not guaranteed to have the SMP-capable harts starting with a unique ID
of zero, matching Zephyr's sequential zero-indexed CPU numbering scheme.

Add an option, RV_BOOT_HART, to choose the hart to boot from.
On reset, check whether the current hart equals RV_BOOT_HART: if so,
boot the first core; otherwise, loop in the secondary-core boot path
and wait to be brought up.

For multi-hart systems that are not running a Zephyr MP or SMP
application, park the non-Zephyr harts in a WFI loop.

Signed-off-by: Conor Paxton <conor.paxton@microchip.com>
2023-01-30 23:45:35 +00:00
David Reiss fb9cdee02d riscv: Fix whitespace in ISR handler
This one line was using spaces instead of tabs for indentation.

Signed-off-by: David Reiss <dreiss@meta.com>
2023-01-27 19:23:21 +09:00
Jordan Yates a3774fd51a arch: option to generate simplified error codes
Add an option to generate simplified error codes instead of more
detailed architecture-specific error codes. Enable this by default in
tests to make exception tests more generic across hardware.

Fixes #54053.

Signed-off-by: Jordan Yates <jordan.yates@data61.csiro.au>
2023-01-27 18:09:32 +09:00
Jamie McCrae ec7044437e treewide: Disable automatic argparse argument shortening
Disable the Python argparse library's automatic shortening
(abbreviation) of command line arguments. This prevents issues whereby,
when a new argument is added, code that wrongly uses the shortened form
of an existing argument (which now also matches the new argument)
silently changes script behaviour.

Signed-off-by: Jamie McCrae <jamie.mccrae@nordicsemi.no>
2023-01-26 20:12:36 +09:00
Nicolas Pitre 6b9526c09b riscv: properly clear pending IPI flags
Commit 4f9b547ebd ("riscv: smp: prepare for more than one IPI type")
didn't clear pending IPI flags.

Signed-off-by: Nicolas Pitre <npitre@baylibre.com>
2023-01-25 19:51:03 -05:00
Nicolas Pitre a211970b42 riscv: improve contended FPU switching
We can leverage the FPU dirty state as an indicator for preemptively
reloading the FPU content when a thread that did use the FPU before
being scheduled out is scheduled back in. This avoids the FPU access
trap overhead when switching between multiple threads with heavy FPU
usage.

Signed-off-by: Nicolas Pitre <npitre@baylibre.com>
2023-01-24 15:26:18 +01:00
Nicolas Pitre ff07da6ff1 riscv: integrate the new FPU context switching support
FPU context switching is always performed on demand through the FPU
access exception handler. Actual task switching only grants or denies
FPU access depending on the current FPU owner.

Because RISC-V doesn't have a dedicated FPU access exception, we must
catch the Illegal Instruction exception and look for actual FP opcodes.

There is no longer a need to allocate FPU storage on the stack for every
exception, making the ESF smaller and stack overflows less likely.
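
The "look for actual FP opcodes" part boils down to checking the major
opcode of the trapped instruction (in addition to the FP CSR accesses
noted above); an illustrative check might be:

    #include <stdbool.h>
    #include <stdint.h>

    /* Major opcodes used by FP instructions in the RISC-V spec:
     * loads, stores, fused multiply-adds and OP-FP arithmetic.
     * (Compressed FP load/store forms would need a separate check.)
     */
    static bool is_fp_insn(uint32_t insn)
    {
    	switch (insn & 0x7f) {
    	case 0x07: /* LOAD-FP  (FLW/FLD) */
    	case 0x27: /* STORE-FP (FSW/FSD) */
    	case 0x43: /* FMADD  */
    	case 0x47: /* FMSUB  */
    	case 0x4b: /* FNMSUB */
    	case 0x4f: /* FNMADD */
    	case 0x53: /* OP-FP  */
    		return true;
    	default:
    		return false;
    	}
    }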

Signed-off-by: Nicolas Pitre <npitre@baylibre.com>
2023-01-24 15:26:18 +01:00
Nicolas Pitre cb4c0f6c94 riscv: smarter FPU context switching support
Instead of saving/restoring FPU content on every exception and task
switch, this replaces FPU sharing support with a "lazy" (on-demand)
context switching algorithm similar to the one used on ARM64.

Every thread starts with FPU access disabled. On the first access the
FPU trap is invoked to:

- flush the FPU content to the previous thread's memory storage;

- restore the current thread's FPU content from memory.

When a thread loads its data in the FPU, it becomes the FPU owner.

FPU content is preserved across task switching; however, FPU access is
allowed if the new thread is the FPU owner and denied otherwise.
A thread may claim FPU ownership only through the FPU trap. This way,
threads that don't use the FPU won't force an FPU context switch.
If only one running thread uses the FPU, there will be no FPU context
switching to do at all.
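
In pseudo-C, the trap-driven ownership transfer described above
amounts to something like the following (hypothetical helper and
field names; the real code also deals with dirty-state tracking and
SMP):

    /* Called from the illegal-instruction trap once an FP access is
     * detected and the current thread is not the FPU owner.
     */
    static void fpu_ownership_trap(struct k_thread *current)
    {
    	/* hypothetical per-CPU field */
    	struct k_thread *owner = _current_cpu->arch.fpu_owner;

    	if (owner != NULL) {
    		/* Flush the previous owner's FPU content to its
    		 * memory storage.
    		 */
    		fpu_save(&owner->arch.saved_fp_context);
    	}

    	/* Restore this thread's FPU content and grant it access. */
    	fpu_restore(&current->arch.saved_fp_context);
    	_current_cpu->arch.fpu_owner = current;
    	fpu_access_enable();
    }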

It is possible to do FP accesses in ISRs and syscalls. This is not the
norm though, so the same principle is applied here, although exception
contexts may not own the FPU. When they access the FPU, the FPU content
is flushed and the exception context is granted FPU access for the
duration of the exception. Nested IRQs are disallowed in that case to
dispense with the need to save and restore the exception's FPU context
data.

This is the core implementation only to ease reviewing. It is not yet
hooked into the build.

Signed-off-by: Nicolas Pitre <npitre@baylibre.com>
2023-01-24 15:26:18 +01:00
Nicolas Pitre 4f9b547ebd riscv: smp: prepare for more than one IPI type
Right now this is hardcoded to z_sched_ipi(). Make it so that other IPI
services can be added in the future.

Signed-off-by: Nicolas Pitre <npitre@baylibre.com>
2023-01-24 15:26:18 +01:00
Daniel Leung db495a5ebe xtensa: stop execution under simulator for double exception
If running under the Xtensa simulator, it is good to tell the simulator
to stop execution once we reach a double exception, as the current
double exception handler is simply an endless loop. If we turn
on tracing in the simulator, the output file would contain
an infinite iteration of this endless loop, and the simulator
would need to be stopped manually before the file size goes out of
control. So tell the simulator to stop once we reach this point
instead of spinning forever.

Signed-off-by: Daniel Leung <daniel.leung@intel.com>
2023-01-23 10:09:18 +00:00
Nicolas Pitre 883e9d367f riscv: translate CPU numbers to hartid values for IPI
Given the Zephyr CPU number is no longer tied to the hartid, we must
consider the actual hartid when sending an IPI to a given CPU. Since
those hartids can be anything, let's just save them in the cpu structure
as each CPU is brought online.

While at it, throw in some `get_hart_msip()` cleanups.

Signed-off-by: Nicolas Pitre <npitre@baylibre.com>
2023-01-19 13:48:42 +01:00
Nicolas Pitre 26d7bd47a0 riscv: decouple the Zephyr CPU number from the hart ID
Currently it is assumed that Zephyr CPU numbers match their hartid
value one for one. This assumption was relied upon to efficiently
retrieve the current CPU's `struct _cpu` pointer.

People are starting to have systems with a mix of different usages for
each CPU, and such an assumption may no longer be true.

Let's completely decouple the hartid from the Zephyr CPU number by
stuffing each CPU's `struct _cpu` pointer in their respective scratch
register instead. `arch_curr_cpu()` becomes more efficient as well.

Since the scratch register was previously used to store userspace's
exception stack pointer, that is now moved into `struct _cpu_arch`
which implied minor user space entry code cleanup and rationalization.
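
With the `struct _cpu` pointer kept in the scratch CSR, fetching the
current CPU becomes a single CSR read; schematically (a simplified
sketch of the idea, not the exact in-tree code):

    static ALWAYS_INLINE struct _cpu *arch_curr_cpu(void)
    {
    	struct _cpu *cpu;

    	/* mscratch holds this hart's &_kernel.cpus[i], stored there
    	 * once when the CPU is brought online.
    	 */
    	__asm__ volatile ("csrr %0, mscratch" : "=r" (cpu));

    	return cpu;
    }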

Signed-off-by: Nicolas Pitre <npitre@baylibre.com>
2023-01-19 13:48:42 +01:00
Nicolas Pitre 96a65e2fc0 riscv: don't include the secondary CPU boot code when not needed
Linker garbage collection couldn't work due to the explicit reference
in reset.S.

Signed-off-by: Nicolas Pitre <npitre@baylibre.com>
2023-01-19 13:48:42 +01:00
Dat Nguyen Duy 50e77c2f9a arch: arm: aarch32: cortex_a_r: disable interrupts before context switching
Until now, the Cortex-A/R AArch32 implementation of context
switching expected that interrupts were disabled. This is
true if a context switch happens in thread context.

But if a context switch happens as the last action during
interrupt context, this assumption is not true because the
interrupts are still enabled (to allow nested interrupts).

Disable interrupts at the last interrupt action to ensure
the interrupts are always disabled before the context switch
is processed.

Signed-off-by: Dat Nguyen Duy <dat.nguyenduy@nxp.com>
2023-01-18 16:22:29 +01:00
Flavio Ceolin c896b1e911 userspace: Do not use --relax flag
On platforms where the linker is capable of doing global optimizations,
like relaxing addressing modes and synthesizing new instructions, Zephyr
has to disable them when enabling USERSPACE, since the build expects
that addresses don't change after the first-stage build.

Signed-off-by: Flavio Ceolin <flavio.ceolin@intel.com>
2023-01-16 11:20:32 +00:00
Stephanos Ioannidis 4a64bfe351 treewide: Use CONFIG_CPP instead of CONFIG_CPLUSPLUS
This commit updates all in-tree code to use `CONFIG_CPP` instead of
`CONFIG_CPLUSPLUS`, which is now deprecated.

Signed-off-by: Stephanos Ioannidis <stephanos.ioannidis@nordicsemi.no>
2023-01-13 17:42:55 -05:00
Jordan Yates 35e78c4502 arch: arm: return arm specific fatal error reasons
Return specific fault reasons instead of the generic
`K_ERR_CPU_EXCEPTION`, which provides minimal debugging aid.

Fixes #53093.

Signed-off-by: Jordan Yates <jordan.yates@data61.csiro.au>
2023-01-12 10:01:05 +01:00
Carlo Caione 34e0294652 arm: aarch32: Use proper sys functions for cache maintenance
This patchset is fixing two things:

1. The proper sys_* functions are used for cache maintenance
   operations.

2. To check the status of the L1 cache, the SCB registers are probed,
   so the code assumes a core architectural cache is present; thus,
   make the code conditionally compiled on CONFIG_ARCH_CACHE.

Signed-off-by: Carlo Caione <ccaione@baylibre.com>
2023-01-10 18:22:32 -05:00
Tomasz Bursztyka c9ef674f0e arch/x86: Fix compilation error
The arch_dcache_range() function does not exist anymore, nor does the
K_CACHE_WB macro. Remove their uses entirely.

The arch_dcache_flush_range() signature changed, so update the callers
accordingly.

Signed-off-by: Tomasz Bursztyka <tomasz.bursztyka@linux.intel.com>
2023-01-10 14:06:33 +00:00
Carlo Caione 43a61329cc riscv: Allow SOC to override arch_irq_{lock,unlock,unlocked}
RISC-V has a modular design. Some hardware with a custom interrupt
controller needs a bit more work to lock / unlock IRQs.

Account for this hardware by introducing a set of new
z_soc_irq_* functions that can override the default behaviour.

Signed-off-by: Carlo Caione <ccaione@baylibre.com>
2023-01-09 19:21:39 +01:00
Flavio Ceolin a6b4af4a93 ia32: irq: Remove unnecessary header
printk is not used anywhere in this file. Just remove it.

Signed-off-by: Flavio Ceolin <flavio.ceolin@intel.com>
2023-01-09 12:07:28 -05:00
Flavio Ceolin 1e83113e4f x86: fatal: Remove possible deadcode
Move esf_get_code to inside CONFIG_EXCEPTION_DEBUG guard.

Signed-off-by: Flavio Ceolin <flavio.ceolin@intel.com>
2023-01-09 12:07:28 -05:00
Flavio Ceolin 816954156c x86: fatal: Remove possible deadcode
esf_get_sp is used only when CONFIG_THREAD_STACK_INFO is enabled.
Just move this function around to be inside an ifdef guard.

Signed-off-by: Flavio Ceolin <flavio.ceolin@intel.com>
2023-01-09 12:07:28 -05:00
Carlo Caione c13d23a43e riscv: Add support for hardware stacking / unstacking
Some RISC-V SoCs implement a mechanism for hardware supported stacking /
unstacking of registers during ISR / exceptions. What happens is that on
ISR / exception entry part of the context is automatically saved by the
hardware on the stack without software intervention, and the same part
of the context is restored by the hardware usually on mret.

This is currently not yet supported by Zephyr, where the full context
must be saved by software in the full-fledged ESF. This patchset
addresses exactly this case.

At least three things are needed to support this in a general fashion:
(1) a way to store in software only the part of the ESF not
already stacked by hardware, (2) a way to restore in software only the
part of the context that is not going to be restored by hardware and (3)
a way to define a custom ESF.

Point (3) is important because the full ESF frame is now composed of a
custom part depending on the hardware (which can choose which registers
to stack / unstack and the order in which they are saved onto the stack)
and a software-defined part for the remaining part of the context.

In this patch a new CONFIG_RISCV_SOC_HAS_ISR_STACKING option is
introduced that enables the code path supporting the three points by
means of three macros that must be implemented by the user in a
soc_isr_stacking.h file:
SOC_ISR_SW_STACKING, SOC_ISR_SW_UNSTACKING and SOC_ISR_STACKING_ESF
(refer to the symbol help for more details).

This is an example of soc_isr_stacking.h for hardware that doesn't do
any hardware stacking / unstacking, where everything is managed in
software:

    #ifndef __SOC_ISR_STACKING
    #define __SOC_ISR_STACKING

    #if !defined(_ASMLANGUAGE)

    #define SOC_ISR_STACKING_ESF_DECLARE \
    	struct __esf { \
    		unsigned long ra; \
    		unsigned long t0; \
    		unsigned long t1; \
    		unsigned long t2; \
    		unsigned long t3; \
    		unsigned long t4; \
    		unsigned long t5; \
    		unsigned long t6; \
    		unsigned long a0; \
    		unsigned long a1; \
    		unsigned long a2; \
    		unsigned long a3; \
    		unsigned long a4; \
    		unsigned long a5; \
    		unsigned long a6; \
    		unsigned long a7; \
    		unsigned long mepc; \
    		unsigned long mstatus; \
    		unsigned long s0; \
    	} __aligned(16)
    #else

    #define SOC_ISR_SW_STACKING \
    	addi sp, sp, -__z_arch_esf_t_SIZEOF; \
    	DO_CALLER_SAVED(sr);

    #define SOC_ISR_SW_UNSTACKING \
    	DO_CALLER_SAVED(lr);

    #endif /* _ASMLANGUAGE */
    #endif /* __SOC_ISR_STACKING */

Signed-off-by: Carlo Caione <ccaione@baylibre.com>
2023-01-09 10:15:07 +01:00
Piotr Jasiński 7fa0af01cf Kconfig: add config for low-priority debug mon isr
The debug monitor exception needs to be configured to a low priority in
order to be useful for debugging (to prioritize other interrupts when
waiting on a breakpoint).
Add a config that configures the exception this way.

Signed-off-by: Piotr Jasiński <piotr.jasinski@nordicsemi.no>
2022-12-28 12:00:46 +01:00
Piotr Jasiński 1fe4b1eb90 linker: make debug monitor isr symbol weak
It seems that currently it's impossible to create a custom
implementation of the debug monitor exception handler without updating
the vector table (z_arm_debug_monitor maps to a fault handler).
My proposal is to make this symbol weak, so that it can be overridden.
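
With the symbol made weak, an application or SoC can then provide its
own handler simply by defining a strong symbol of the same name; a
minimal sketch (assuming the handler is a plain void function, as in
the vector table):

    /* Overrides the weak default, which maps to the fault handler. */
    void z_arm_debug_monitor(void)
    {
    	/* custom debug monitor handling goes here */
    }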

Signed-off-by: Piotr Jasiński <piotr.jasinski@nordicsemi.no>
2022-12-28 12:00:46 +01:00
Lucas Tamborrino 9e289c1b20 arch: xtensa: save FPU register in context switching
Save FP user register and FP register file during context switch.

This change enables shared FP registers mode using CONFIG_FPU_SHARING.

Since there is no lazy stacking, the FPU registers will be saved regardless
of whether floating point calculations are performed in the threads when
CONFIG_FPU_SHARING is enabled. This requires 72 additional bytes in the
stack memory.

Signed-off-by: Lucas Tamborrino <lucas.tamborrino@espressif.com>
2022-12-27 13:23:17 +01:00
Evgeniy Paltsev 93b98fc2de ARC: introduce reworked irq_offload implementation
Use interrupts (with dedicated interrupt line) for irq_offload
instead of exception-based implementation.

That allows implementing IRQ_OFFLOAD without adding special code
to the interrupt / exception path for IRQ_OFFLOAD handling and,
moreover, allows testing the real interrupt code.

Signed-off-by: Eugeniy Paltsev <Eugeniy.Paltsev@synopsys.com>
Signed-off-by: Evgeniy Paltsev <PaltsevEvgeniy@gmail.com>
2022-12-20 22:51:24 +01:00
Siyuan Cheng 5854670b98 DSP: add dsp unit test
Add a DSP context switch test.
Add a complex multiplication test for the ARC processor.

Signed-off-by: Siyuan Cheng <siyuanc@synopsys.com>
2022-12-19 11:56:55 +01:00
Siyuan Cheng 70ff49af37 DSP: add DSP support for ARC processor
Add DSP registers to the context switch.
Add AGU registers to the context switch to support XY memory.
Add a thread option and API to enable/disable the DSP switch.

Signed-off-by: Siyuan Cheng <siyuanc@synopsys.com>
2022-12-19 11:56:55 +01:00