Commit graph

159 commits

Author SHA1 Message Date
Huifeng Zhang 1b9c824f0b arch: arm64: arm_mpu: enable d-cache
Enable the d-cache during EL1 initialization and fix some cache
coherence issues.

Signed-off-by: Huifeng Zhang <Huifeng.Zhang@arm.com>
2023-09-01 13:23:26 +02:00
Anas Nashif 6baa622958 arch: move exc_handle.h under zephyr/arch/common
This header is private and included only in architecture code; there is
no need for it to be at the top of the public include directory.

Note: This might move to a more private location later. For now just
cleaning up the obvious issues.

Signed-off-by: Anas Nashif <anas.nashif@intel.com>
2023-08-31 09:19:19 -04:00
Chirag Kochar b05c2f4718 arch: arm64: core: Fix IPI issue for non-booted cores
During Zephyr boot with SMP enabled, while z_smp_init has not yet
completed and only the boot core is running, incorrect code in the
broadcast_ipi API causes the following issues (sketched below):
1. An IPI is generated even though the other cores are not booted.
2. The sgi1r register is set incorrectly.
3. All the affinity (1/2/3) values are incorrect.

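A minimal, hypothetical sketch of the intended guard (the helpers
cpu_is_booted() and cpu_mpid() are illustrative stand-ins, not the actual
Zephyr implementation):

```c
#include <zephyr/kernel.h>
#include <zephyr/drivers/interrupt_controller/gic.h>

/* Hypothetical helpers assumed to be provided by the arch layer. */
extern bool cpu_is_booted(unsigned int cpu);  /* true once the core finished booting */
extern uint64_t cpu_mpid(unsigned int cpu);   /* MPIDR affinity value of the core */

static void broadcast_ipi_sketch(unsigned int ipi)
{
	unsigned int num_cpus = arch_num_cpus();

	for (unsigned int i = 0; i < num_cpus; i++) {
		if (i == _current_cpu->id) {
			continue;               /* never IPI ourselves */
		}
		if (!cpu_is_booted(i)) {
			continue;               /* core not up yet: no SGI, no sgi1r write */
		}
		gic_raise_sgi(ipi, cpu_mpid(i), BIT(cpu_mpid(i) & 0xf));
	}
}
```
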
Signed-off-by: Chirag Kochar <chirag.kochar@intel.com>
2023-08-31 10:24:58 +02:00
Chad Karaginides f5ff62f35a arch: arm64: Reserved Cores
Enhanced arch_start_cpu so that if a core is not available, based on the
pm_cpu_on return value, booting does not halt. Instead the next core in
cpu_node_list is tried. If the number of CPU nodes described in the
device tree is greater than CONFIG_MP_MAX_NUM_CPUS, the extra cores are
held in reserve and used if any earlier core in cpu_node_list fails to
power on. If the number of cores described in the device tree matches
CONFIG_MP_MAX_NUM_CPUS, no cores are in reserve and booting behaves as
before: it halts.

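A hedged sketch of the retry idea (pm_cpu_on() and cpu_node_list exist in
the tree; the loop structure and names below are only an approximation of
the real arch_start_cpu() logic):

```c
#include <errno.h>
#include <zephyr/kernel.h>
#include <zephyr/drivers/pm_cpu_ops.h>

/* Assumed: generated from the devicetree cpu nodes; may contain more
 * entries than CONFIG_MP_MAX_NUM_CPUS, the extras acting as reserved cores.
 */
extern uint64_t cpu_node_list[];
extern size_t cpu_node_count;

static int start_next_available_core(uintptr_t entry, size_t *next_node)
{
	/* Walk the remaining DT cpu nodes until one accepts CPU_ON. */
	for (size_t i = *next_node; i < cpu_node_count; i++) {
		if (pm_cpu_on(cpu_node_list[i], entry) == 0) {
			*next_node = i + 1;
			return 0;       /* this core is powering on */
		}
		/* Core unavailable: fall through and try the next reserve. */
	}

	return -ENODEV;                 /* no usable core left; caller may halt */
}
```
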
Signed-off-by: Chad Karaginides <quic_chadk@quicinc.com>
2023-08-28 15:57:19 +02:00
Hou Zhiqiang 3b98cfe1c4 arm64: mmu: add Non-cacheable normal memory mapping support
In some shared-memory use cases between Zephyr and another OS running in
parallel, a non-cacheable normal memory mapping is needed for data
coherence.

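For illustration, a static region could use the new attribute roughly as
below (a sketch assuming the attribute is exposed as MT_NORMAL_NC; the
region name and addresses are made up):

```c
#include <zephyr/arch/arm64/arm_mmu.h>
#include <zephyr/sys/util.h>

/* Hypothetical shared-memory window mapped as non-cacheable normal memory
 * so both Zephyr and the other OS observe coherent data without extra
 * cache maintenance.
 */
static const struct arm_mmu_region mmu_regions[] = {
	MMU_REGION_FLAT_ENTRY("shared_ram_nc",
			      0x90000000, 0x100000,
			      MT_NORMAL_NC | MT_P_RW_U_NA | MT_DEFAULT_SECURE_STATE),
};

const struct arm_mmu_config mmu_config = {
	.num_regions = ARRAY_SIZE(mmu_regions),
	.mmu_regions = mmu_regions,
};
```
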
Signed-off-by: Hou Zhiqiang <Zhiqiang.Hou@nxp.com>
2023-08-28 11:41:51 +02:00
Florian La Roche 533c913b86 arm64: core/prep_c.c func prototype before function itself
Move the function prototype to before the function definition itself.
Maybe the prototype could be removed altogether?

Signed-off-by: Florian La Roche <Florian.LaRoche@gmail.com>
2023-08-23 10:02:54 +02:00
Jaxson Han 0f7bbff050 arch: arm64: Refine v8R AArch64 MPU regions switch
The current mechanism of MPU region switching configures and reprograms
the regions (including inserting, splitting the dynamic region, and
flushing the regions to the registers) every time during the context
switch. This not only causes heavy kernel stack usage but also lowers
performance.

To improve it, move the configuration operations ahead to make sure the
context switch only flushes the current thread's regions to the registers
and does not configure the regions anymore. To achieve this, configure
the regions during any operations related to partitions (partition
add/remove, and domain add/remove thread), flush the sys_dyn_regions if
the current thread is the privileged thread, and flush the thread's own
regions if it is a user thread.

Signed-off-by: Jaxson Han <jaxson.han@arm.com>
2023-08-21 13:27:07 +02:00
Jaxson Han c039e05724 arch: arm64: mpu: Use BR mode to flush the MPU regions
Use BR (background region) mode while flushing the regions instead of
enabling/disabling the MPU, which is a heavy operation.

Signed-off-by: Jaxson Han <jaxson.han@arm.com>
2023-08-21 13:27:07 +02:00
Grant Ramsay 20e297a295 arch: arm64: Make ARM64_VA_BITS/ARM64_PA_BITS named choices
Naming these choices allows setting a default value in defconfig.

Signed-off-by: Grant Ramsay <gramsay@enphaseenergy.com>
2023-08-18 10:13:12 +02:00
Mykola Kvach c9b534c4eb arch: arm64: mmu: avoid using of set/way cache instructions
Architecturally, Set/Way operations are not guaranteed to affect all
caches prior to the PoC, and may require other IMPLEMENTATION DEFINED
maintenance (e.g. MMIO control of system-level caches).

First of all, this patch was designed for the Xen domain Zephyr build;
set/way ops are not easily virtualized by Xen. S/W emulation is disabled
because the IP-MMU is active for Dom0. The IP-MMU is an IO-MMU made by
Renesas and, like any good IO-MMU, it shares page tables with the CPU.
Trying to emulate S/W with the IP-MMU active will lead to IO-MMU faults.
So if we build Zephyr as a Xen initial domain, it won't work with cache
management support enabled.

Exposing set/way cache maintenance to a virtual machine is unsafe, not
least because the instructions are not permission-checked, but also
because they are not broadcast between CPUs.

In this commit, VA data invalidation is invoked after every mapping
instead of using set/way instructions at MMU init. This made it easy to
remove sys_cache_data_invd_all from the MMU enable function, because
every addition of a new memory region to the xlat tables now invalidates
that memory, so we are sure there is no stale data inside.

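A minimal sketch of the idea, using the public sys_cache API (the MMU code
itself uses lower-level equivalents; add_map() below is a hypothetical
placeholder):

```c
#include <zephyr/cache.h>

static void map_and_invalidate(void *virt, size_t size)
{
	/* add_map(virt, size, attrs); -- hypothetical xlat-table update */

	/* Invalidate by VA so no stale lines cover the freshly mapped range,
	 * instead of relying on set/way operations at MMU init.
	 */
	(void)sys_cache_data_invd_range(virt, size);
}
```
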
Signed-off-by: Mykola Kvach <mykola_kvach@epam.com>
2023-08-17 15:15:15 +02:00
Gerard Marull-Paretas 28c139f653 drivers: pm_cpu_ops: psci: provide sys_poweroff hook
Instead of implementing a custom power off API (pm_system_off),
implement the sys_poweroff hook, and indicate power off is supported by
selecting HAS_POWEROFF. Note that according to the PSCI specification
(DEN0022E) the SYSTEM_OFF operation does not return; however, an error
is printed and the system is halted in case it does.

Note that pm_system_off has also been deleted; from now on, systems
supporting PSCI should enable CONFIG_POWEROFF and call the standard
sys_poweroff() API.

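As a rough illustration of the hook shape (z_sys_poweroff() is the hook
behind sys_poweroff(); psci_system_off() below is a hypothetical stand-in
for the driver's SYSTEM_OFF call):

```c
#include <zephyr/kernel.h>
#include <zephyr/sys/poweroff.h>

/* Hypothetical wrapper for the PSCI SYSTEM_OFF call. */
extern int psci_system_off(void);

void z_sys_poweroff(void)
{
	int ret = psci_system_off();

	/* Per DEN0022E SYSTEM_OFF must not return; if it does, report and halt. */
	printk("System power off failed (%d) - halting\n", ret);
	for (;;) {
		/* spin forever */
	}
}
```
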
Signed-off-by: Gerard Marull-Paretas <gerard@teslabs.com>
2023-08-04 16:59:36 +02:00
Girisha Dengi 75547dd522 soc: arm64: Add agilex5 soc folder and its configurations
Add the Agilex5 SoC folder, MMU table and its configurations for the
Intel SoC FPGA Agilex5 platform for initial bring-up.
Add the Arm Cortex-A76 and Cortex-A55 HMP cluster type.

Signed-off-by: Teik Heng Chong <teik.heng.chong@intel.com>
Signed-off-by: Girisha Dengi <girisha.dengi@intel.com>
2023-07-25 16:58:01 +00:00
Carlo Caione 5b2d716e76 arm64: Keep the frame pointer in leaf functions
It helps with debugging and usually on ARM64 we don't care about a small
increase in code size.

Signed-off-by: Carlo Caione <ccaione@baylibre.com>
2023-07-17 10:14:18 +00:00
Carlo Caione c9a68afede arm64: Set FP to NULL before jumping to C code
When the frame-pointer based unwinding is enabled, the stop condition
for the stack backtrace is (FP == NULL).

Set FP to 0 before jumping to C code.

Signed-off-by: Carlo Caione <ccaione@baylibre.com>
2023-07-14 09:33:00 +00:00
Carlo Caione 18636af6d6 arm64: Add frame-pointer based stack unwinding
Add the frame-pointer based stack unwinding on ARM64.

This is a typical output with the feature enabled:

*** Booting Zephyr OS build zephyr-v3.4.0-1029-gae22ff648c16 ***
E: ELR_ELn: 0x00000000400011c8
E: ESR_ELn: 0x0000000096000046
E:   EC:  0x25 (Data Abort taken without a change in Exception level)
E:   IL:  0x1
E:   ISS: 0x46
E: FAR_ELn: 0x0000000000000000
E: TPIDRRO: 0x0100000040011650
E: x0:  0x0000000000000000  x1:  0x0000000000000003
E: x2:  0x0000000000000003  x3:  0x000000004005c6a0
E: x4:  0x0000000000000000  x5:  0x000000004005c7f0
E: x6:  0x0000000048000000  x7:  0x0000000048000000
E: x8:  0x0000000000000005  x9:  0x0000000000000000
E: x10: 0x0000000000000000  x11: 0x0000000000000000
E: x12: 0x0000000000000000  x13: 0x0000000000000000
E: x14: 0x0000000000000000  x15: 0x0000000000000000
E: x16: 0x0000000040004290  x17: 0x0000000000000000
E: x18: 0x0000000000000000  lr:  0x0000000040001208
E:
E: backtrace  0: fp: 0x000000004005c690 lr: 0x0000000040001270
E: backtrace  1: fp: 0x000000004005c6b0 lr: 0x0000000040001290
E: backtrace  2: fp: 0x000000004005c7d0 lr: 0x0000000040004ac0
E: backtrace  3: fp: 0x000000004005c7f0 lr: 0x00000000400013a4
E: backtrace  4: fp: 0x000000004005c800 lr: 0x0000000000000000
E:
E: >>> ZEPHYR FATAL ERROR 0: CPU exception on CPU 0
E: Current thread: 0x40011310 (unknown)
E: Halting system

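The backtrace above follows the standard AArch64 frame-record chain; a
minimal sketch of such a walker (illustrative, not the exact Zephyr
implementation) looks like:

```c
#include <stdint.h>
#include <zephyr/sys/printk.h>

/* Each AArch64 frame record is { previous FP, saved LR }. The walk stops
 * when FP is NULL, which is why FP is zeroed before jumping to C code at
 * boot.
 */
static void backtrace_sketch(uint64_t *fp)
{
	for (int i = 0; fp != NULL; i++) {
		uint64_t lr = fp[1];

		printk("backtrace %2d: fp: 0x%016llx lr: 0x%016llx\n",
		       i, (unsigned long long)(uintptr_t)fp, (unsigned long long)lr);

		fp = (uint64_t *)fp[0];
	}
}
```
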
Signed-off-by: Carlo Caione <ccaione@baylibre.com>
2023-07-13 17:04:05 +02:00
Jaxson Han 5d643ecd24 arch: arm64: Fix z_arm64_fatal_error declaration error
The z_arm64_fatal_error should be
extern void z_arm64_fatal_error(unsigned int reason, z_arch_esf_t *esf);

Signed-off-by: Jaxson Han <jaxson.han@arm.com>
2023-07-11 15:04:54 +02:00
Jaxson Han 6fc3903bbe arch: arm64: mpu: Fix some minor CHECKIF issues
There are two CHECKIFs whose check conditions are wrong.

Signed-off-by: Jaxson Han <jaxson.han@arm.com>
2023-07-11 15:04:54 +02:00
Mykola Kvach d0472aae7a soc: arm64: add Renesas Rcar Gen3 SoC support
Add files for supporting arm64 Renesas r8a77951 SoC.
Add config option CPU_CORTEX_A57.

Enable build of clock_control_r8a7795_cpg_mssr.c for
a new ARM64 SoC R8A77951.

Signed-off-by: Mykola Kvach <mykola_kvach@epam.com>
2023-07-11 11:17:41 +02:00
Roberto Medina 6622735ea8 arch: arm64: add support for coredump
* Add support for coredump on ARM64 architectures.
* Add the script used for post-processing coredump output.

Signed-off-by: Marcelo Ruaro <marcelo.ruaro@huawei.com>
Signed-off-by: Rodrigo Cataldo <rodrigo.cataldo@huawei.com>
Signed-off-by: Roberto Medina <roberto.medina@huawei.com>
2023-07-03 09:32:26 +02:00
Luca Fancellu c8a9634a30 arch: arm64: Use atomic for fpu_owner pointer
Currently the lazy FPU saving algorithm on arm64 uses the fpu_owner
pointer from the cpu structure to track the owner of the FPU context on
that cpu and save it in case someone other than the owner accesses the
FPU.

The semantics for memory consistency across SMP systems are quite error
prone, and reworks of the current code might miss some barriers that
could lead to inconsistent state across cores. To overcome the issue,
use atomics to hide the complexity and be sure that the code will behave
as intended.

While there, add some isb barriers after writes to cpacr_el1, following
the guidance of ARM ARM specs about writes on system registers.

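A hedged sketch of the pattern using Zephyr's atomic pointer API (the
owner slot and save helper below are illustrative; the real field lives
in the per-CPU arch structure):

```c
#include <zephyr/kernel.h>
#include <zephyr/sys/atomic.h>

static atomic_ptr_t fpu_owner_sketch;   /* illustrative per-CPU owner slot */

static void fpu_claim(struct k_thread *me)
{
	struct k_thread *owner = atomic_ptr_get(&fpu_owner_sketch);

	if (owner != me) {
		/* Another thread's context is live in the FPU: save it first,
		 * then publish ourselves as the new owner with a single atomic
		 * store instead of relying on hand-placed barriers.
		 */
		/* save_fpu_context(owner); -- hypothetical */
		atomic_ptr_set(&fpu_owner_sketch, me);
	}
}
```
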
Signed-off-by: Luca Fancellu <luca.fancellu@arm.com>
2023-06-08 09:35:11 -04:00
Jaxson Han 08791d5fae arch: arm64: Enable FPU and FPU_SHARING for v8r aarch64
This commit is to enable FPU and FPU_SHARING for v8r aarch64.

Signed-off-by: Jaxson Han <jaxson.han@arm.com>
2023-05-30 10:14:55 +02:00
Jaxson Han 0b3136c2e8 arch: arm64: Add cpu_map to map cpu id and mpid
cpu_node_list does not hold the correct mapping of cpu id and mpid when
the core booting sequence does not follow the DTS cpu node sequence. This
causes an issue where the SGI cannot be delivered to the right target.

Add the cpu_map array to hold the correct mapping between cpu id and
mpid.

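Conceptually the mapping is just an array indexed by logical CPU id (a
simplified sketch; names and the sentinel value are illustrative):

```c
#include <zephyr/kernel.h>

#define INV_MPID UINT64_MAX

/* cpu_map[cpu_id] records the MPID of the core that actually booted as
 * that logical CPU, regardless of the DTS cpu node order.
 */
static uint64_t cpu_map[CONFIG_MP_MAX_NUM_CPUS] = {
	[0 ... (CONFIG_MP_MAX_NUM_CPUS - 1)] = INV_MPID,
};

static uint64_t cpu_id_to_mpid(unsigned int cpu_id)
{
	__ASSERT(cpu_map[cpu_id] != INV_MPID, "CPU %u not booted yet", cpu_id);

	return cpu_map[cpu_id];   /* SGIs are then targeted at this MPID */
}
```
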
Signed-off-by: Jaxson Han <jaxson.han@arm.com>
2023-05-30 10:14:55 +02:00
Jaxson Han f03a4cec57 arch: arm64: Fix the STACK_INIT logic during the reset
Each core should init its own stack during reset when SMP is enabled,
and not touch the others. The current init results in each core starting
to init the stack from the same address, which breaks the others.

Fix the issue by setting a correct start address.

Signed-off-by: Jaxson Han <jaxson.han@arm.com>
2023-05-30 10:14:55 +02:00
Jaxson Han 101ae5d240 arch: arm64: mpu: Remove LOG print before mpu enabled
The LOG system has unaligned access instructions which cause an
alignment exception before the MPU is enabled. Remove the LOG print
before the MPU is enabled to avoid this issue.

Signed-off-by: Jaxson Han <jaxson.han@arm.com>
2023-05-30 10:14:55 +02:00
Andy Ross b89e427bd6 kernel/sched: Rename/redocument wait_for_switch() -> z_sched_switch_spin()
This trick turns out also to be needed by the abort/join code.
Promote it to a more formal-looking internal API and clean up the
documentation to (hopefully) clarify the exact behavior and better
explain the need.

This is one of the more... enchanted bits of the scheduler, and while
the trick is IMHO pretty clean, it remains a big SMP footgun.

Signed-off-by: Andy Ross <andyross@google.com>
2023-05-26 17:09:35 -04:00
Chad Karaginides 9ab7354a46 arch: arm64: SCR_EL3 EEL2 Enablement
For secure EL2 to be entered, the EEL2 bit in SCR_EL3 must be set.  This
should only be set if Zephyr has not been configured for NS mode only,
if the device is currently in secure EL3, and if secure EL2 is supported
via the SEL2 field in ID_AA64PFR0_EL1.  Added logic to enable EEL2 if all
conditions are met.

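An illustrative C rendering of the check (the real code runs in the EL3
early-boot assembly; the bit positions below follow the ARMv8-A
definitions of SCR_EL3.EEL2 and ID_AA64PFR0_EL1.SEL2):

```c
#include <stdint.h>

#define SCR_EL3_EEL2            (1ULL << 18) /* EEL2 bit in SCR_EL3 */
#define ID_AA64PFR0_SEL2_SHIFT  36           /* SEL2 field, bits [39:36] */

static void enable_eel2_if_supported(void)
{
	uint64_t pfr0, scr;

	__asm__ volatile("mrs %0, id_aa64pfr0_el1" : "=r" (pfr0));

	if (((pfr0 >> ID_AA64PFR0_SEL2_SHIFT) & 0xf) != 0U) {
		/* Secure EL2 implemented: allow entering it from EL3. */
		__asm__ volatile("mrs %0, scr_el3" : "=r" (scr));
		scr |= SCR_EL3_EEL2;
		__asm__ volatile("msr scr_el3, %0" : : "r" (scr));
	}
}
```
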
Signed-off-by: Chad Karaginides <quic_chadk@quicinc.com>
2023-05-26 13:51:50 -04:00
Chad Karaginides f8b9faff24 arch: arm64: Added ISBs after SCTLR Modifications
Per the ARMv8 architecture document, modification of the system control
register is a context-changing operation. Context-changing operations are
only guaranteed to be seen after a context synchronization event.
An ISB is a context synchronization event.  One has been placed after
each SCTLR modification. The issue was found running at full speed on
target.

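The pattern, shown with Zephyr's barrier API (a minimal sketch; the actual
writes happen in the early init and cache/MMU code):

```c
#include <stdint.h>
#include <zephyr/sys/barrier.h>

static inline void sctlr_el1_set_bits(uint64_t bits)
{
	uint64_t val;

	__asm__ volatile("mrs %0, sctlr_el1" : "=r" (val));
	val |= bits;
	__asm__ volatile("msr sctlr_el1, %0" : : "r" (val));

	/* SCTLR writes are context-changing: force a context synchronization
	 * event so the new setting is seen before any dependent instruction.
	 */
	barrier_isync_fence_full();
}
```
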
Signed-off-by: Chad Karaginides <quic_chadk@quicinc.com>
2023-05-25 16:33:03 -04:00
Nicolas Pitre 8e9872a7c8 arm64: prevent possible deadlock on SMP with FPU sharing
Let's consider CPU1 waiting on a spinlock already taken by CPU2.

It is possible for CPU2 to invoke the FPU and trigger an FPU exception
when the FPU context for CPU2 is not live on that CPU. If the FPU context
for the thread on CPU2 is still held in CPU1's FPU then an IPI is sent
to CPU1 asking to flush its FPU to memory.

But if CPU1 is spinning on a lock already taken by CPU2, it won't see
the pending IPI as IRQs are disabled. CPU2 won't get its FPU state
restored and won't complete the required work to release the lock.

Let's prevent this deadlock scenario by looking for a pending FPU IPI
from the spinlock loop, using the arch_spin_relax() hook.

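A heavily hedged sketch of the hook (the flush-check helper is
hypothetical, and the exact way the hook is wired into the spinlock loop
is not shown here):

```c
#include <zephyr/kernel.h>

/* Hypothetical helper: if an FPU-flush IPI is pending for this CPU, flush
 * the local FPU context to memory and acknowledge the request.
 */
extern void z_arm64_fpu_flush_if_ipi_pending(void);

/* Called from the spinlock busy-wait loop, so a CPU spinning with IRQs
 * masked can still service the FPU-flush request and break the deadlock.
 */
void arch_spin_relax(void)
{
	z_arm64_fpu_flush_if_ipi_pending();
}
```
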
Signed-off-by: Nicolas Pitre <npitre@baylibre.com>
2023-05-25 08:25:11 +00:00
Carlo Caione 6f3a13d974 barriers: Move __ISB() to the new API
Remove the arch-specific ARM-centric __ISB() macro and use the new
barrier API instead.

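The change is mechanical; roughly (an illustrative before/after):

```c
#include <zephyr/sys/barrier.h>

void example_after_migration(void)
{
	/* Previously: __ISB();  (ARM-specific macro) */
	barrier_isync_fence_full();   /* new arch-agnostic barrier API */
}
```
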
Signed-off-by: Carlo Caione <ccaione@baylibre.com>
2023-05-24 13:13:57 -04:00
Carlo Caione cb11b2e84b barriers: Move __DSB() to the new API
Remove the arch-specific ARM-centric __DSB() macro and use the new
barrier API instead.

Signed-off-by: Carlo Caione <ccaione@baylibre.com>
2023-05-24 13:13:57 -04:00
Carlo Caione 2fa807bcd1 barriers: Move __DMB() to the new API
Remove the arch-specific ARM-centric __DMB() macro and use the new
barrier API instead.

Signed-off-by: Carlo Caione <ccaione@baylibre.com>
2023-05-24 13:13:57 -04:00
Carlo Caione c2ec3d49cd arm64: cache: Enable full inlining of the cache ops
Move all the cache functions to a standalone header to allow for the
inlining of these functions.

Signed-off-by: Carlo Caione <ccaione@baylibre.com>
2023-04-20 14:56:09 -04:00
Peter Mitsis 1e250b37df arch: arm64: Remove unused absolute symbols
Removes ARM64 specific unused absolute symbols that are defined
via the GEN_ABSOLUTE_SYM() macro.

Signed-off-by: Peter Mitsis <peter.mitsis@intel.com>
2023-04-18 10:51:28 -04:00
Gerard Marull-Paretas a5fd0d184a init: remove the need for a dummy device pointer in SYS_INIT functions
The init infrastructure, found in `init.h`, is currently used by:

- `SYS_INIT`: to call functions before `main`
- `DEVICE_*`: to initialize devices

They are all sorted according to an initialization level + a priority.
`SYS_INIT` calls are really orthogonal to devices; however, the required
function signature takes a `const struct device *dev` as its first
argument. The only reason for that is that the same init machinery is
used by devices, so we have something like:

```c
struct init_entry {
	int (*init)(const struct device *dev);
	/* only set by DEVICE_*, otherwise NULL */
	const struct device *dev;
}
```

As a result, we end up with such weird/ugly pattern:

```c
static int my_init(const struct device *dev)
{
	/* always NULL! add ARG_UNUSED to avoid compiler warning */
	ARG_UNUSED(dev);
	...
}
```

This is really a result of poor internals isolation. This patch proposes
making init entries more flexible so that they can accept system
initialization calls like this:

```c
static int my_init(void)
{
	...
}
```

This is achieved using a union:

```c
union init_function {
	/* for SYS_INIT, used when init_entry.dev == NULL */
	int (*sys)(void);
	/* for DEVICE*, used when init_entry.dev != NULL */
	int (*dev)(const struct device *dev);
};

struct init_entry {
	/* stores init function (either for SYS_INIT or DEVICE*) */
	union init_function init_fn;
	/* stores device pointer for DEVICE*, NULL for SYS_INIT. Allows
	 * to know which union entry to call.
	 */
	const struct device *dev;
}
```

This solution **does not increase ROM usage** and allows offering clean
public APIs for both SYS_INIT and DEVICE*. Note, however, that the init
machinery keeps a coupling with devices.

**NOTE**: This is a breaking change! All `SYS_INIT` functions will need
to be converted to the new signature. See the script offered in the
following commit.

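For reference, a converted SYS_INIT call then looks like this (a minimal
usage sketch):

```c
#include <zephyr/init.h>

/* New-style SYS_INIT function: no device pointer, no ARG_UNUSED needed. */
static int my_init(void)
{
	/* one-time system initialization goes here */
	return 0;
}

SYS_INIT(my_init, POST_KERNEL, CONFIG_KERNEL_INIT_PRIORITY_DEFAULT);
```
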
Signed-off-by: Gerard Marull-Paretas <gerard.marull@nordicsemi.no>

init: convert SYS_INIT functions to the new signature

Conversion scripted using scripts/utils/migrate_sys_init.py.

Signed-off-by: Gerard Marull-Paretas <gerard.marull@nordicsemi.no>

manifest: update projects for SYS_INIT changes

Update modules with updated SYS_INIT calls:

- hal_ti
- lvgl
- sof
- TraceRecorderSource

Signed-off-by: Gerard Marull-Paretas <gerard.marull@nordicsemi.no>

tests: devicetree: devices: adjust test

Adjust test according to the recently introduced SYS_INIT
infrastructure.

Signed-off-by: Gerard Marull-Paretas <gerard.marull@nordicsemi.no>

tests: kernel: threads: adjust SYS_INIT call

Adjust to the new signature: int (*init_fn)(void);

Signed-off-by: Gerard Marull-Paretas <gerard.marull@nordicsemi.no>
2023-04-12 14:28:07 +00:00
Jaxson Han e416c5f1bd arch: arm64: Update current stack limit on every context switch
Update current stack limit on every context switch, including switching
to irq stack and switching back to thread stack.

Signed-off-by: Jaxson Han <jaxson.han@arm.com>
2023-03-14 10:49:22 +01:00
Jaxson Han 00adc0b493 arch: arm64: Enable safe exception stack
This commit mainly enables the safe exception stack, including the stack
switch. Init the safe exception stack by calling
z_arm64_safe_exception_stack during the boot stage on every core. Also,
tweak several files to properly switch the mode in the different cases.

1) The same as before, when executing in userspace, SP_EL0 holds the
user stack and SP_EL1 holds the privileged stack, using EL1h mode.

2) When entering an exception from EL0, SP_EL0 is saved in the _esf_t
structure and SP_EL1 becomes the current SP; the safe exception stack is
then loaded into SP_EL0, making sure SP_EL0 always points to the safe
exception stack as long as the system is running in kernel space.

3) When exiting an exception from EL1 to EL0, SP_EL0 is restored from
the value previously saved in the _esf_t structure, still in EL1h mode.

4) Whether entering or exiting an exception from EL1 to EL1, SP_EL0
keeps holding the safe exception stack, unchanged, as mentioned above.

5) Do a quick stack check every time an exception is entered from EL1 to
EL1. If the check fails, set SP_EL1 to the safe exception stack and then
handle the fatal error.

Overall, an exception from user mode is handled with the kernel stack,
on the assumption that a stack overflow cannot happen at the entry of an
exception from EL0 to EL1. However, an exception from kernel mode is
first checked against the safe exception stack to see whether the kernel
stack has overflowed, because the exception might be triggered by an
invalid stack access.

Signed-off-by: Jaxson Han <jaxson.han@arm.com>
2023-03-14 10:49:22 +01:00
Jaxson Han 463b1c9396 arch: arm64: Add safe exception stack init function
Add a safe exception stack init function which does several things:
1) Set the current CPU's safe exception stack pointer to its
corresponding stack top.
2) Init sp_el0 with the above safe exception stack, making sure sp_el0
points to the per-cpu safe_stack in kernel space.
3) Init current_stack_limit and corrupted_sp to 0.

Signed-off-by: Jaxson Han <jaxson.han@arm.com>
2023-03-14 10:49:22 +01:00
Jaxson Han 3a5fa0498f arch: arm64: Add stack_limit to thread_arch_t
Add stack_limit to thread_arch_t to store the thread's stack limit.

Signed-off-by: Jaxson Han <jaxson.han@arm.com>
2023-03-14 10:49:22 +01:00
Jaxson Han 61b8b34b27 arch: arm64: Add the sp variable in _esf_t
As preparation for enabling the safe exception stack, add a variable to
_esf_t to save the user stack held by sp_el0 at the point an exception
happens from EL0. The newly added variable in _esf_t is named sp, and
the user stack is restored from it when exceptions eret to EL0.

Signed-off-by: Jaxson Han <jaxson.han@arm.com>
2023-03-14 10:49:22 +01:00
Jaxson Han 7040c55438 arch: arm64: Add stack check relevant variables to _cpu_arch_t
Add three per-cpu variables for quick access.

The safe_exception_stack stores the top of the safe exception stack.
The current_stack_limit stores the current thread's privileged stack
limit. The corrupted_sp stores the privileged SP or IRQ SP for the stack
overflow case, or 0 for the normal case.

Signed-off-by: Jaxson Han <jaxson.han@arm.com>
2023-03-14 10:49:22 +01:00
Jaxson Han d8d74b1320 arch: arm64: Add el label for vector entry macro
Add a new label, el, to the z_arm64_enter_exc vector entry macro to
indicate which EL the exception comes from.

Signed-off-by: Jaxson Han <jaxson.han@arm.com>
2023-03-14 10:49:22 +01:00
Jaxson Han 6c40abb99f arch: arm64: Introduce safe exception stack
Introduce two configs to prepare to enable the safe exception stack for
the kernel space. This is the preparation for enabling hardware stack
guard. Also define the safe exception stack for kernel exception stack
check.

Signed-off-by: Jaxson Han <jaxson.han@arm.com>
2023-03-14 10:49:22 +01:00
Nicolas Pitre bcef633316 arch/arm64/mmu: arch_mem_map() should not overwrite existing mappings
If it does, this is most certainly a bug. arch_mem_unmap() should be
used before mapping the same area again.

Signed-off-by: Nicolas Pitre <npitre@baylibre.com>
2023-03-13 09:15:37 +01:00
Nicolas Pitre 364d7527c1 arch/arm64/mmu: minor addition to debugging code
Display table allocations and whether or not mappings
can be overwritten.

Signed-off-by: Nicolas Pitre <npitre@baylibre.com>
2023-03-13 09:15:37 +01:00
Nicolas Pitre abb50e1605 arch/arm64/mmu: fix table discarding code
First, we have commit 7d27bd0b85 ("arch: arm64: Disable infinite
recursion warning for `discard_table`") that blindly silenced a compiler
warning that actually highlighted a real bug. Revert that and fix the
bug properly. And yes, mea culpa for having been the first to approve
that commit, or even for creating the bug in the first place.

Then let's add proper table usage count handling so that discard_table()
works properly and does not leak table pages.

Signed-off-by: Nicolas Pitre <npitre@baylibre.com>
2023-03-07 08:33:05 +01:00
Henri Xavier 45af717a66 arch/arm64: Implement ASID support in ARM64 MMU
Improves context-switch performance.

TLB invalidation and the nG bit are used conservatively. This could
be improved in future work.

Tested with tests/benchmarks/sched_userspace:

BEFORE:
```
Swapping  2 threads: 161562583 cyc & 1000000 rounds ->   1615 ns per ctx
Swapping  8 threads: 161569289 cyc & 1000000 rounds ->   1615 ns per ctx
Swapping 16 threads: 161649163 cyc & 1000000 rounds ->   1616 ns per ctx
Swapping 32 threads: 163487880 cyc & 1000000 rounds ->   1634 ns per ctx
```

AFTER:
```
Swapping  2 threads: 18129207 cyc & 1000000 rounds ->    181 ns per ctx
Swapping  8 threads: 49702891 cyc & 1000000 rounds ->    497 ns per ctx
Swapping 16 threads: 55898650 cyc & 1000000 rounds ->    558 ns per ctx
Swapping 32 threads: 58059704 cyc & 1000000 rounds ->    580 ns per ctx
```

Signed-off-by: Henri Xavier <datacomos@huawei.com>
2022-12-13 17:21:11 +09:00
Carlo Caione 385f06e6a1 arm64: cache: Rework cache API
And use the new API.

Signed-off-by: Carlo Caione <ccaione@baylibre.com>
2022-12-01 13:40:56 -05:00
Carlo Caione 189cd1f4a2 cache: Rework cache API
The cache operations must be quick, optimized and possibly inlined. The
current API is clunky: functions are not inlined, and parameters that
are basically always known at compile time are passed around.

In this patch we rework the cache functions to allow us to get rid of
useless parameters and make inlining easier.

In particular this changeset is doing three things:

1. `CONFIG_HAS_ARCH_CACHE` is now `CONFIG_ARCH_CACHE` and
   `CONFIG_HAS_EXTERNAL_CACHE` is now `CONFIG_EXTERNAL_CACHE`

2. The cache API has been reworked.

3. Comments are added.

Signed-off-by: Carlo Caione <ccaione@baylibre.com>
2022-12-01 13:40:56 -05:00
Henri Xavier b54ba9877f arm64: implement arch_system_halt
When PSCI is enabled, implement `arch_system_halt` using
PSCI_SHUTDOWN.

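A hedged sketch, assuming the pm_system_off() wrapper from pm_cpu_ops
that existed at the time (it has since been replaced by sys_poweroff()):

```c
#include <zephyr/kernel.h>
#include <zephyr/drivers/pm_cpu_ops.h>

FUNC_NORETURN void arch_system_halt(unsigned int reason)
{
	ARG_UNUSED(reason);

	(void)irq_lock();
	pm_system_off();   /* issues PSCI SYSTEM_OFF when PSCI is enabled */

	for (;;) {
		/* should not be reached; spin if SYSTEM_OFF returned */
	}
}
```
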
Signed-off-by: Henri Xavier <datacomos@huawei.com>
2022-11-23 11:37:08 +01:00
Henri Xavier 55df3852bf arm64: Add comment that arch_curr_cpu and get_cpu must be synced
There is code duplication, as one is in C and the other is an assembly
macro. As there is no easy way to find out about this duplication,
adding a comment seems the best way to go.

Signed-off-by: Henri Xavier <datacomos@huawei.com>
2022-11-02 16:08:00 -05:00