This commit selectively disables the infinite recursion warning
(`-Winfinite-recursion`), which may be reported by GCC 12 and above,
for the `disable_table` function because no actual infinite recursion
will occur under normal circumstances.
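A minimal sketch of how the warning can be silenced for just this
function (generic GCC pragmas; the exact mechanism used in-tree may
differ):

#if defined(__GNUC__) && (__GNUC__ >= 12)
#pragma GCC diagnostic push
#pragma GCC diagnostic ignored "-Winfinite-recursion"
#endif
static void disable_table(void)
{
	/* self-recursion that provably terminates at run time */
}
#if defined(__GNUC__) && (__GNUC__ >= 12)
#pragma GCC diagnostic pop
#endif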
Signed-off-by: Stephanos Ioannidis <root@stephanos.io>
arch_mem_map() on ARM64 currently does not support the K_MEM_PERM_USER
parameter, so we cannot allocate userspace-accessible memory using the
memory helpers. Fix this.
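A sketch of the intended behavior (attribute macro names assumed, not
necessarily the ones used by the actual fix):

/* in arch_mem_map(): honour K_MEM_PERM_USER by selecting an
 * EL0-accessible access permission for the page table entry
 */
if (flags & K_MEM_PERM_USER) {
	entry_flags |= MT_RW_AP_ELx;        /* RW at EL1 and EL0 */
} else {
	entry_flags |= MT_RW_AP_EL_HIGHER;  /* RW at EL1 only */
}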
Signed-off-by: Carlo Caione <ccaione@baylibre.com>
<soc.h> has traditionally been used as a proxy to HAL headers,
register definitions, etc. Nowadays, <soc.h> is anarchy. It serves a
different purpose depending on the SoC. In some cases it includes HALs,
in some others it works as a header sink/proxy (for no good reason), as
a register definition when there's no HAL... To make things worse, it is
being included in code that is, in theory, non-SoC specific.
This patch is part of a series intended to improve the situation by
removing <soc.h> usage when not needed, and by eventually removing it.
Signed-off-by: Gerard Marull-Paretas <gerard.marull@nordicsemi.no>
The init stack of the secondary core should use KERNEL_STACK_BUFFER +
sz; using Z_THREAD_STACK_BUFFER will calculate the wrong stack size.
Signed-off-by: Jaxson Han <jaxson.han@arm.com>
The current SMP boot code doesn't consider that the cores can boot at
the same time. Possibly, more than one core can boot into the primary
core boot sequence. Fix it by using an atomic operation to make sure
only one core acts as the primary core.
Correspondingly, sgi_raise_ipi should translate the CPU id to an mpidr.
Signed-off-by: Jaxson Han <jaxson.han@arm.com>
Fix the write of ICC_SRE_EL3 to OR in bits, aligning with the
original intent to read-modify-write this register.
Also disable FIQ and IRQ bypass so interrupt delivery
occurs through GIC. Platforms may choose to override
this behavior in z_arm64_el3_plat_init implementations.
Remove ICC_SRE_EL3 config from viper and qemu since
this is now handled in the arm64 arch core.
Signed-off-by: Eugene Cohen <quic_egmc@quicinc.com>
Assembler files were not migrated with the new <zephyr/...> prefix.
Note that the conversion has been scripted, refer to #45388 for more
details.
Signed-off-by: Gerard Marull-Paretas <gerard.marull@nordicsemi.no>
In order to bring consistency in-tree, migrate all arch code to the new
prefix <zephyr/...>. Note that the conversion has been scripted, refer
to zephyrproject-rtos#45388 for more details.
Signed-off-by: Gerard Marull-Paretas <gerard.marull@nordicsemi.no>
Commit d8f186aa4a ("arch: common: semihost: add semihosting
operations") encapsulated semihosting invocation in a per-arch
semihost_exec() function. There is a fixed register variable
declaration for the return value, but this variable is not listed as
an output operand to the respective inline assembly segments, which is
an error.
This is not reported as such by gcc and the generated code is still OK
in those particular instances but this is not guaranteed, and clang
does complain about such cases.
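A minimal sketch of the corrected pattern (AArch64 flavor; opcode and
types abbreviated): the register variable holding the result must
appear in the output operand list:

static inline long semihost_exec(unsigned long op, void *args)
{
	register unsigned long w0 __asm__("w0") = op;
	register void *x1 __asm__("x1") = args;
	register long result __asm__("x0");

	__asm__ volatile ("hlt 0xf000"
			  : "=r" (result)      /* the missing output operand */
			  : "r" (w0), "r" (x1)
			  : "memory");
	return result;
}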
Signed-off-by: Nicolas Pitre <npitre@baylibre.com>
Add an API that utilizes the ARM semihosting mechanism to interact with
the host system when a device is being emulated or run under a debugger.
RISCV is implemented in terms of the ARM implementation, and the ARM
definitions cross enough architectures that they are defined as
'common'.
Functionality is exposed as a separate API instead of as syscall
implementations (`_lseek`, `_open`, etc.) due to various quirks of the
ARM mechanisms that mean function arguments are not standard.
For more information see:
https://developer.arm.com/documentation/dui0471/m/what-is-semihosting-
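A hypothetical usage sketch of such an API (function and constant
names assumed):

long fd = semihost_open("out.log", SEMIHOST_OPEN_WB);

if (fd >= 0) {
	semihost_write(fd, "hello\n", 6);
	semihost_close(fd);
}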
Signed-off-by: Jordan Yates <jordan.yates@data61.csiro.au>
In ARM parlance, the subroutine call return address is stored in the
"link register" or simply lr. Refer to it as lr which is clearer than
the anonymous x30 designation.
Signed-off-by: Nicolas Pitre <npitre@baylibre.com>
ARM64 supports more memory mapping types for device memory (nGnRnE,
nGnRE, GRE); add support for these mappings to the OS common mapping
API function z_phys_map().
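A hypothetical usage sketch (flag names assumed):

uint8_t *regs;

/* request strongly-ordered (nGnRnE) device memory for a register block */
z_phys_map(&regs, DEVICE_BASE_PHYS, 0x1000,
	   K_MEM_ARM_DEVICE_nGnRnE | K_MEM_PERM_RW);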
Signed-off-by: Jiafei Pan <Jiafei.Pan@nxp.com>
It is not necessary to go through the full exception exit code.
This is simpler, smaller and faster.
Signed-off-by: Nicolas Pitre <npitre@baylibre.com>
Make it optimal without the need for an SVC/exception roundtrip on
every context switch. Performance numbers from tests/benchmarks/sched:
Before:
unpend 85 ready 58 switch 258 pend 231 tot 632 (avg 699)
After:
unpend 85 ready 59 switch 115 pend 138 tot 397 (avg 478)
Signed-off-by: Nicolas Pitre <npitre@baylibre.com>
Get rid of all those global variables and scheduler locking.
Use the regular IRQ exit path to let tests properly validate preemption.
Signed-off-by: Nicolas Pitre <npitre@baylibre.com>
Add CONFIG_SMP to fvp_baser_aemv8r_smp board.
Fix compile warnings by adding missing header file in arm_mpu.c.
Signed-off-by: Jaxson Han <jaxson.han@arm.com>
This commit mainly fixes the broadcast_ipi issue when one core
broadcasts an ipi to the other cores using gic_raise_sgi. The issue
doesn't affect the functionality of Zephyr SMP but shows up when Zephyr
runs on Xen. Suppose Xen provides 4 CPUs to the Zephyr guest: when cpu0
broadcasts an ipi to the rest of the cores, the mask should be 0xE
(0b1110), but Zephyr currently sends 0xFFFE. Xen therefore receives a
target list containing many invalid CPUs which don't exist. The fix is
to generate the target list from the online CPUs.
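A minimal sketch of the idea (assuming a boot-maintained table of
online MPIDs; all names hypothetical):

for (int i = 0; i < CONFIG_MP_NUM_CPUS; i++) {
	uint64_t tgt = cpu_mpid_list[i];

	/* skip ourselves and CPUs that never came online */
	if (tgt == my_mpidr || tgt == INVALID_MPID) {
		continue;
	}
	/* target list bit: Aff0 within the target's cluster */
	gic_raise_sgi(ipi, tgt, 1 << (tgt & 0xF));
}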
Signed-off-by: Jaxson Han <jaxson.han@arm.com>
In the Armv8-R AArch64 profile [1], the processor always executes in
secure mode. But the FVP_BaseR_AEMv8R before version 11.16.16 doesn't
strictly follow this rule: it still has some non-secure registers
(e.g. CNTHP_CTL_EL2).
Since version 11.16.16, the FVP_BaseR_AEMv8R has fixed this issue. The
CNTHP_XXX_EL2 registers have been changed to CNTHPS_XXX_EL2. So the
FVP_BaseR_AEMv8R (version >= 11.16.16) cannot boot Zephyr. This patch
will fix it.
[1] https://developer.arm.com/documentation/ddi0600/latest/
Signed-off-by: Jaxson Han <jaxson.han@arm.com>
Currently, the DCACHE range invalidation can cause data corruption
when the ends of the given range are not aligned to a full cache line.
Signed-off-by: Hou Zhiqiang <Zhiqiang.Hou@nxp.com>
Avoid executing ISRs using the thread stack as it might not be sized
for that. Plus, we do have IRQ stacks already set up for us.
The non-nested IRQ context is still (and has to be) saved on the thread
stack as the thread could be preempted.
The irq_offload case is never nested and always invoked with the
sched_lock held so it can be simplified a bit.
Signed-off-by: Nicolas Pitre <npitre@baylibre.com>
This is a uint32_t so the proper register width must be used, otherwise
the adjacent structure member will be overwritten (didn't happen in
practice because of struct member alignment but still). This makes the
inc_nest_counter and dec_nest_counter macros rather unwieldy, especially
with upcoming changes, so let's just remove them.
Signed-off-by: Nicolas Pitre <npitre@baylibre.com>
Let's provide our own z_early_memset() and z_early_memcpy() rather than
making our own .bss clearing function that risks missing out on updates
to the main version.
Also remove extra stuff already provided by kernel_internal.h.
Signed-off-by: Nicolas Pitre <npitre@baylibre.com>
Avoid setting the switch_handler in the z_get_next_switch_handle()
code when the context is not yet fully saved, to prevent a race
against other cores waiting on wait_for_switch().
See issue #40795 and discussion in #41840
Signed-off-by: Carlo Caione <ccaione@baylibre.com>
For functions returning nothing, there is no need to document
with @return, as Doxygen complains about "documented empty
return type of ...".
Signed-off-by: Daniel Leung <daniel.leung@intel.com>
This commit adds support for the Xen enlighten page and initial
support for Xen event channels. It is needed for the future
implementation of Xen PV drivers.
The enlighten page is now mapped to the prepared memory area at the
PRE_KERNEL_1 stage. On success, the event channel logic gets
initialized and can be used right after Zephyr starts. The current
implementation only allows the use of pre-defined event channels (PV
console/XenBus) and works only in single-CPU mode (without
VCPUOP_register_vcpu_info). Event channel allocation will be
implemented in future versions.
Signed-off-by: Dmytro Firsov <dmytro_firsov@epam.com>
This changes the arch_mem_domain_*() functions to return errors.
This allows the callers a chance to recover if needed.
Note that:
- For assertions where the code can bail out early without side
  effects, these are converted to CHECKIF(). (This usually means that
  updating of page tables or translation tables has not been started
  yet.)
- Other assertions are retained to signal fatal errors during
  development.
- The additional CHECKIF()s are structured so that they bail out as
  early as possible. If errors are encountered inside a loop, it will
  still continue with the loop, so it works as it did before this
  change with assertions disabled.
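A minimal sketch of the CHECKIF() pattern (the argument check here is
hypothetical):

int arch_mem_domain_partition_add(struct k_mem_domain *domain,
				  uint32_t partition_id)
{
	CHECKIF(domain == NULL) {
		/* bail out before any translation table is touched */
		return -EINVAL;
	}
	/* ... update translation tables, propagating any error ... */
	return 0;
}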
Signed-off-by: Daniel Leung <daniel.leung@intel.com>
Call into z_thread_usage_stop() before ISR entry to avoid including
interrupt handling totals in thread usage stats.
This is pretty much exactly where we want it, just after the context
saving steps (which we can't elide since the hook is in C) and
immediately before the tracing hook for ISR entry. And as I'm reading
the code, this is purely for Zephyr-registered interrupts, meaning
that software exceptions will be accounted for (correctly) as part of
the excepting thread.
Signed-off-by: Andy Ross <andrew.j.ross@intel.com>
This commit adds Xen hypervisor call interface for arm64 architecture.
This is needed for further development of Xen features in Zephyr.
Signed-off-by: Dmytro Firsov <dmytro_firsov@epam.com>
In some drivers, nocache memory needs to be used for DMA coherent
memory, so add nocache memory segment mapping and support for ARM64
platforms.
The following variables definition example shows they will use nocache
memory allocation:
int var1 __nocache;
int var2 __attribute__((__section__(".nocache")));
Signed-off-by: Jiafei Pan <Jiafei.Pan@nxp.com>
Add the arm64 MMU arch_virt_region_align() implementation used
to return a possible virtual address alignment in order to
optimize the MMU table layout, possibly avoiding L3 tables by
using some L1 & L2 blocks instead for most of the mapping.
Suggested-by: Nicolas Pitre <npitre@baylibre.com>
Signed-off-by: Neil Armstrong <narmstrong@baylibre.com>
When mapping the following:
device_map(&base0, DEVA_BASE, DEVA_SIZE, K_MEM_CACHE_NONE);
device_map(&base1, DEVB_BASE, DEVB_SIZE, K_MEM_CACHE_NONE);
with:
- DEVA_SIZE not multiple of a 4KB granule L2 block size (0x200000)
- DEVB_SIZE more than 2 x 4KB granule L2 block size
The mmu code will fill the first device_map() in an L3 table, then
on the second mapping the mmu code will complete the previous L3
table.
At the end of this table, the current code will select an L2 block
instead of a table because the *virtual address* is a multiple of
the L2 block size.
But if the physical address is not, the virtual block offset will
be ORed into the physical address rather than added, leading to a
weird scenario where virtual memory is duplicated because the
addresses are ORed and not added.
Example:
device_map(&base0, DEVA_BASE, 0x20000, K_MEM_CACHE_NONE);
device_map(&base1, 0x44000000, 0x400000, K_MEM_CACHE_NONE);
The first will result in VA 0x5ffe0000 and the second in VA 0x5fbe0000.
The MMU code will use a table to map 0x5ffe0000 to 0x5fbfffff.
For 0x5fc00000 to 0x5fdfffff, since the VA is multiple of an L2
block size, the L3 table is not used.
But the L2 block descriptor address is 0x44060000, meaning
that for each access in this L2 block, the following will be done:
0x44060000 | (VA & 0x1FFFFF)
This works for the 0x5fc40000 to 0x5fc5ffff accesses, but for the
0x5fc60000 (0x5fbe0000 + 0x80000) access the PA gets calculated as:
0x44060000 | (0x5fc60000 & 0x1FFFFF) = 0x44060000 | 0x60000 = 0x44060000
instead of the expected 0x44080000.
The solution is to check whether the PA descriptor is aligned to the
level's block size and, if not, move to the next level.
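A sketch of the added check (macro and variable names assumed):

uint64_t level_size = 1ULL << LEVEL_TO_VA_SIZE_SHIFT(level);

/* use a block descriptor only when BOTH the VA and the PA are
 * aligned to this level's block size; otherwise descend
 */
if (size >= level_size &&
    (virt & (level_size - 1)) == 0 &&
    (phys & (level_size - 1)) == 0) {
	/* safe to map a block at this level */
} else {
	/* walk/allocate a next-level table instead */
}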
Signed-off-by: Neil Armstrong <narmstrong@baylibre.com>
Add dynamic_areas_init. It will mark an MPU region as a dynamic region
area. The dynamic region areas are designed to be the background
regions, so that the system can re-program the thread regions on top
of the background regions.
Add configure_dynamic_mpu_regions to re-program the thread regions on
the background regions. The configure_dynamic_mpu_regions function is
the core function implementing userspace support for the MPU. This
function is used in thread creation and context switch.
During context switch, the previous thread's regions should be
disabled, and the new thread's regions re-programmed. Since the
thread's stack region is also switched, the new thread's stack would
be in use before its stack region has been re-programmed. To avoid
the exception this would generate, I disable the MPU before
flush_dynamic_regions_to_mpu and enable it afterwards.
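The context switch path then looks roughly like this (sketch; only
flush_dynamic_regions_to_mpu is named by this commit, the surrounding
calls are assumed):

arm_core_mpu_disable();
/* stack accesses remain safe here: background regions still apply */
flush_dynamic_regions_to_mpu(dyn_regions, region_num);
arm_core_mpu_enable();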
Signed-off-by: Jaxson Han <jaxson.han@arm.com>
Add a new macro MEM_DOMAIN_ALIGN_AND_SIZE for MMU and MPU memory
alignment.
MEM_DOMAIN_ALIGN_AND_SIZE is
- CONFIG_MMU_PAGE_SIZE, when the MMU is enabled.
- CONFIG_ARM_MPU_REGION_MIN_ALIGN_AND_SIZE, when the MPU is enabled.
Signed-off-by: Jaxson Han <jaxson.han@arm.com>
Include the newly introduced include/arch/arm64/mm.h instead of
arm_mmu.h or arm_mpu.h.
Unify the function names z_arm64_thread_pt_init/z_arm64_swap_ptables
into z_arm64_thread_mem_domains_init/z_arm64_swap_mem_domains for MMU
and MPU, because:
1. The MMU and MPU have almost the same logic.
2. The MPU doesn't have ptables.
3. Using unified function names helps reduce "#if defined" blocks.
Similarly, change z_arm64_ptable_ipi to z_arm64_domain_sync_ipi,
and fix a log bug in arm_mmu.c.
Signed-off-by: Jaxson Han <jaxson.han@arm.com>
This patch mainly moves MPU-related code from
arch/arm64/core/cortex_r/mpu/ to arch/arm64/core/cortex_r/ and moves
the MPU header files from include/arch/arm64/cortex_r/mpu/ to
include/arch/arm64/cortex_r/.
Signed-off-by: Jaxson Han <jaxson.han@arm.com>
Referring to the Arm Generic Interrupt Controller Architecture
Specification, GIC architecture versions 3 and 4 (see the 2.2.1
"Special INTIDs" paragraph), these INTIDs are reserved for special
purposes and should be ignored for now.
For the ITS implementation, the INTID 1023 must be ignored since this
special INTID will trigger after an LPI acknowledge, thus triggering
the spurious interrupt handler.
The GICv3 Linux implementation ignores these INTIDs the same way.
Signed-off-by: Neil Armstrong <narmstrong@baylibre.com>
In case we enable a large number of IRQs, like when enabling LPIs using
interrupts > 8192, we hit an assembler error where the immediate value
is too large.
Copy the IRQ number into x1 to permit using a large IRQ number.
Signed-off-by: Neil Armstrong <narmstrong@baylibre.com>
Cleanup and preparation commit for linker script generator.
Zephyr linker scripts provide start and end symbols for each larger
area in the linker script.
The symbols _image_text_start and _image_text_end sometimes include
linker/kobject-text.ld. This means there must be both the regular
__text_start and __text_end symbols for the pure text section, as well
as <group>_start and <group>_end symbols.
The symbols describing the text region, which covers more than just the
text section itself, will thus be changed to:
_image_text_start -> __text_region_start
_image_text_end -> __text_region_end
Signed-off-by: Torsten Rasmussen <Torsten.Rasmussen@nordicsemi.no>
Cleanup and preparation commit for linker script generator.
Zephyr linker scripts provide start and end symbols for each larger
area in the linker script.
The symbols _image_rom_start and _image_rom_end correspond to the group
ROMABLE_REGION defined in the ld linker scripts.
The symbols _image_rodata_start and _image_rodata_end are not placed as
an independent group but cover common-rom.ld, thread-local-storage.ld,
kobject-rom.ld and snippets-rodata.ld.
This commit aligns those names and prepares for generation of groups in
linker scripts.
The symbols describing the ROMABLE_REGION will be renamed to:
_image_rom_start -> __rom_region_start
_image_rom_end -> __rom_region_end
The rodata will also use the group symbol notation as:
_image_rodata_start -> __rodata_region_start
_image_rodata_end -> __rodata_region_end
Signed-off-by: Torsten Rasmussen <Torsten.Rasmussen@nordicsemi.no>
This prevents the new thread from attempting to access cached ptable
entries which are no longer valid.
Signed-off-by: Manuel Argüelles <manuel.arguelles@coredumplabs.com>
arm64_smp_init() is at the same initialization level and priority as
the GICv3 interrupt controller. This means that arm64_smp_init() can
be called before the interrupt controller driver has been initialized
if the linker decides to put the driver init entry later. This would
result in faults when arm64_smp_init() tries to connect interrupts.
So move arm64_smp_init() to PRE_KERNEL_2 instead. SMP initialization
is called later in the boot process, so this should not affect SMP
operations.
This is in preparation for building interrupt controller drivers as a
static library. The linking order is going to change, which would
otherwise result in this being initialized before the interrupt
controller driver.
Signed-off-by: Daniel Leung <daniel.leung@intel.com>
Most arch's CMakeLists.txt contain rules to add compiler and linker
flags for coverage if CONFIG_COVERAGE is enabled, but 4 of them were
missing this.
Instead, set the coverage flags in arch/common/CMakeLists.txt which
affects all archs.
Signed-off-by: Jeremy Bettis <jbettis@chromium.org>
During MPU init, we check the MSA_frac bits[55:52] and MSA bits[51:48]
of the ID_AA64MMFR0_EL1 register. Currently we only allow 1F to pass
the check, but according to the Armv8-R AArch64 manual [1], both 1F
and 2F indicate that the processor supports an MPU. This commit fixes
that.
[1]: https://developer.arm.com/documentation/ddi0600/latest/
Signed-off-by: Jaxson Han <jaxson.han@arm.com>
When SMP is enabled, the primary core calls arch_start_cpu to start
secondary cpus. There is an assertion checking the core mpid to make
sure it is called by the primary core.
But the check is bogus. After the first secondary core is brought up,
arm64_cpu_boot_params.mpid will be changed, which will fail the
assertion.
The current solution restores arm64_cpu_boot_params.mpid. However,
using arch_curr_cpu()->id == 0 as the assertion is better.
_current_cpu->id would always fail the assertion inside that macro
(__ASSERT_NO_MSG(!z_smp_cpu_mobile())), so I use arch_curr_cpu()->id
instead.
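The resulting assertion is roughly (sketch):

__ASSERT(arch_curr_cpu()->id == 0,
	 "arch_start_cpu() must be called from the primary core");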
Signed-off-by: Jaxson Han <jaxson.han@arm.com>
"arm64_cpu_boot_params.mpid" should be assigned to "master_core_mpid"
after secondary CPU core up.
Because "arm64_cpu_boot_params.mpid" is used to check the next up CPU
core's mpid is the excepted mpid. After excepted CPU core up, the
"arm64_cpu_boot_params.mpid" doesn't restore to primary CPU core's mpid
and then the primary CPU core try to up third CPU core will crash in
assertion.
Signed-off-by: Huifeng Zhang <Huifeng.Zhang@arm.com>
Every va_start() currently triggers a FPU access trap if FPU is not
already used. This is due to the fact that va_start() must copy FPU
registers that are used for float argument passing into the va_list
object. Flushing the FPU context to its owner and granting access to
the current thread is wasteful if this is only for va_start(),
especially since in most cases there are simply no FP arguments
being passed by the caller.
This is made even worse with exception code (syscalls, IRQ handlers,
etc.) where the exception code has to be resumed with interrupts
disabled upon FPU access as there is no provision for preserving an
interrupted exception mode's FPU context.
Fix those issues by simply simulating the sequence of STR instructions
that the va_start() generates without actually granting FPU access.
We limit ourselves only to exception context to keep changes to a
minimum for now.
This also allows for reverting the ARM64 exception in the nested IRQ
test as it now works properly even if FPU_SHARING is enabled.
Signed-off-by: Nicolas Pitre <npitre@baylibre.com>
The AT instruction gives the corresponding physical address directly.
Much faster than the default implementation.
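A sketch of the idea (register accessors written as raw asm; PAR_EL1
bit 0 flags a failed translation):

uint64_t par;

__asm__ volatile ("at S1E1R, %0" : : "r" (virt));
__asm__ volatile ("isb; mrs %0, PAR_EL1" : "=r" (par));

if ((par & 1) == 0) {
	/* PA bits [47:12] come from PAR_EL1, page offset from the VA */
	phys = (par & 0x0000fffffffff000ULL) | (virt & 0xfffULL);
}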
Signed-off-by: Nicolas Pitre <npitre@baylibre.com>
We can use build-time offsets from a struct k_thread pointer directly
to struct _callee_saved members. No need to compute that at run time.
Signed-off-by: Nicolas Pitre <npitre@baylibre.com>
Both z_arm64_exit_exc and z_arm64_exit_exc_fpu_done must be within
the same section as execution falls through here.
If z_arm64_exit_exc_fpu_done creates a section of its own then the
linker is free to place the two apart, and we absolutely don't want
that.
Signed-off-by: Nicolas Pitre <npitre@baylibre.com>
This adds FPU sharing support with a lazy context switching algorithm.
Every thread is allowed to use FPU/SIMD registers. In fact, the
compiler may insert FPU reg accesses in any context to optimize even
non-FP code unless the -mgeneral-regs-only compiler flag is used, but
Zephyr currently doesn't support such a build.
It is therefore possible with this patch to do FP accesses in ISRs as
well, although IRQs are then disabled to prevent nested IRQs in such
cases.
Because the thread object grows in size, some tests have to be adjusted.
Signed-off-by: Nicolas Pitre <npitre@baylibre.com>
Add the exception depth count to tpidrro_el0 and make it available
through the arch_exception_depth() accessor.
The IN_EL0 flag is now updated unconditionally even if userspace is
not configured. Doing otherwise made the code rather hairy and
I doubt the overhead is measurable.
Signed-off-by: Nicolas Pitre <npitre@baylibre.com>
Add data barriers before and after dcache flush or clean operations,
and restore to data cache level 0 after all ops.
Signed-off-by: Jiafei Pan <Jiafei.Pan@nxp.com>
Moved all assembly code to C code. Fixed arch_dcache_line_size_get()
to get the dcache line size using "4 << dminline", without considering
CWG, according to sample code in the Armv8-A programmer's guide.
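The line size computation is roughly (sketch; CTR_EL0.DminLine is
bits [19:16] and encodes log2 of the line size in 4-byte words):

uint64_t ctr_el0;

__asm__ volatile ("mrs %0, ctr_el0" : "=r" (ctr_el0));

uint32_t dminline = (ctr_el0 >> 16) & 0xF;
size_t dcache_line_size = 4 << dminline; /* in bytes */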
Signed-off-by: Jiafei Pan <Jiafei.Pan@nxp.com>
The macro DT_FOREACH_CHILD iterates over all child nodes, ignoring
the status property. This patch switches to
DT_FOREACH_CHILD_STATUS_OKAY, which only iterates over the enabled
child nodes, to avoid trying to bring up disabled cores.
Signed-off-by: Hou Zhiqiang <Zhiqiang.Hou@nxp.com>
Change the loading of the MPID for secondary cores to add the offset
macro BOOT_PARAM_MPID_OFFSET.
Currently the code loads the MPID for secondary cores from offset 0x0
of the struct arm64_cpu_boot_params. This works only because the macro
BOOT_PARAM_MPID_OFFSET currently has the value 0x0; if the location of
the member "mpid" is changed, it can result in SMP booting failure,
and the build assert won't throw any warning.
Signed-off-by: Hou Zhiqiang <Zhiqiang.Hou@nxp.com>
Data in the data caches may be dirty before the data caches are
enabled, so we need to invalidate all data caches first, before
enabling them.
Signed-off-by: Jiafei Pan <Jiafei.Pan@nxp.com>
The ARM64 port is currently using SP_EL0 for everything: kernel threads,
user threads and exceptions. In addition when taking an exception the
exception code is still using the thread SP without relying on any
interrupt stack.
If on one hand this makes the context switch really quick, because the
thread context is already on the thread stack so we only have to save
one register (SP) for the whole context, on the other hand the major
limitation introduced by this choice is that if for some reason the
thread SP is corrupted or pointing to some inaccessible location (for
example in case of stack overflow), the exception code is unable to
recover or even deal with it.
The usual way of dealing with this kind of problems is to use a
dedicated interrupt stack on SP_EL1 when servicing the exceptions. The
real drawback of this is that, in case of context switch, all the
context must be copied from the shared interrupt stack into a
thread-specific stack or structure, so it is really slow.
We use here a hybrid approach, sacrificing a bit of stack space for a
quicker context switch. While nothing really changes for kernel threads,
for user threads we now use the privileged stack (already present to
service syscalls) as interrupt stack.
When an exception arrives the code now switches to use SP_EL1 that for
user threads is always pointing inside the privileged portion of the
stack of the current running thread. This achieves two things: (1)
isolate exceptions and syscall code to use a stack that is isolated,
privileged and not accessible to user threads and (2) the thread SP is
not touched at all during exceptions, so it can be invalid or corrupted
without any direct consequence.
Signed-off-by: Carlo Caione <ccaione@baylibre.com>
Fix:
arch/arm64/core/smp.c:98:3: error: 'cpu_mpid' may be used uninitialized
in this function [-Werror=maybe-uninitialized]
Signed-off-by: Carlo Caione <ccaione@baylibre.com>
The SMP boot code depends on physical CPU #0 to be first to boot and
subsequent CPUs to follow suit in a linear fashion. Let's decouple
physical and logical numbering so that any physical CPU can be the
boot CPU. This is based on a prior code proposal from
Jiafei Pan <Jiafei.Pan@nxp.com>.
This, however, was about to turn the boot code into some hairy mess.
So let's clean things up and simplify the code as well while at it.
Both the extension and the clean up aren't separate commits because
they actually depend on each other.
The BOOT_PARAM_*_OFFSET defines are locally hardcoded as there is no
point exposing the related structure widely. Build time assertions
ensure they don't go out of sync with the struct definition. And
vector_table.h is repurposed into boot.h to gather boot related
definitions.
Signed-off-by: Nicolas Pitre <npitre@baylibre.com>
Signed-off-by: Jiafei Pan <Jiafei.Pan@nxp.com>
The caller of z_arm64_mmu_init() knows whether it is running on the
primary core or not, so there is no need to check the mpidr; just add
a function parameter.
Signed-off-by: Jiafei Pan <Jiafei.Pan@nxp.com>
This patch fixes three related problems:
1. When calling a syscall, the marshalling function uses the ssf
parameter as the value to be saved in _current->syscall_frame to mark
the beginning and the end of the syscall. This ssf value is not
currently being explicitly set; instead the syscall code uses whatever
value happens to be stored in x6 when the syscall is called. If x6 is
0 at the time the syscall is called, this causes the
z_is_in_user_syscall() function to fail. Fix this by passing the ESF
as the value for ssf.
2. Given that the ssf now carries the ESF, arch_syscall_oops() can use
the ESF to print a more detailed error message with a register dump.
3. When a wrong syscall number is used, handler_bad_syscall() is
called. This function expects the ID number as its first parameter to
print the error message; fix this.
Signed-off-by: Carlo Caione <ccaione@baylibre.com>
It doesn't hurt always having the image header and generating the binary
output. I find myself constantly setting those to 'y', so make it
definitive.
Signed-off-by: Carlo Caione <ccaione@baylibre.com>
The GICv3 driver configures the controller by accessing the ICC_*
system registers. To be able to do that without trapping, we have to
explicitly set at boot, in EL3, the value of the ICC_SRE_EL3 register,
which is architecturally set to an UNKNOWN value on warm reset.
Signed-off-by: Carlo Caione <ccaione@baylibre.com>
Let's fully exploit tpidrro_el0 by storing in it the current CPU's
struct _cpu instance alongside the userspace mode flag bit. This
greatly simplifies the code needed to get at the cpu structure, and
this paves the way to much simpler multi cluster support, as there
is no longer the need to decode MPIDR all the time.
The same code is used in the !SMP case as there are benefits there too
such as avoiding the literal pool, and it looks cleaner.
The tpidrro_el0 value is no longer stored in the exception stack frame.
Instead, we simply restore the user mode flag based on the SPSR value.
This way, more flag bits could be used independently in the future.
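Fetching the current CPU then becomes roughly (sketch; the mask name
is hypothetical):

static ALWAYS_INLINE _cpu_t *arch_curr_cpu(void)
{
	/* the low bit(s) of tpidrro_el0 carry the user mode flag,
	 * the remaining bits are the struct _cpu pointer
	 */
	return (_cpu_t *)(read_tpidrro_el0() & TPIDRROEL0_CURR_CPU);
}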
Signed-off-by: Nicolas Pitre <npitre@baylibre.com>
When ARM_MPU is defined, the MPU drivers will be built into the final
zephyr target.
Signed-off-by: Haibo Xu <haibo.xu@arm.com>
Signed-off-by: Jaxson Han <jaxson.han@arm.com>
Armv8-R AArch64 MPU can support a maximum of 16 memory regions, and
the actual region number can be retrieved from the system register
(MPUIR) during MPU initialization.
The current MPU driver only supports EL1.
Signed-off-by: Haibo Xu <haibo.xu@arm.com>
Signed-off-by: Jaxson Han <jaxson.han@arm.com>
Add Cortex-R82 config to support the Cortex-R82 processor.
Introduce the new CPU_CORTEX_R_AARCH64 config for the Cortex-R 64-bit
processor.
Since the current CPU_CORTEX_R config has already been bound to
AArch32 in many test cases, we add a new CPU_AARCH64_CORTEX_R config
to distinguish it from the Cortex-R 32-bit processor.
We do not use CPU_CORTEX_R64 because that name would be ambiguous with
processor names like Cortex-R82.
Signed-off-by: Jaxson Han <jaxson.han@arm.com>
The typical number of needed translation tables depends on memory
domain usage and userspace support, but also on the virtual address
space width due to the number of translation levels involved.
Reflect that in the default value.
Also fix a related comment where values were off by 1.
Signed-off-by: Nicolas Pitre <npitre@baylibre.com>
The structure for the arm64_cpu_init array has to carry the cache
alignment on the whole structure and not on some internal padding
to achieve the desired effect.
And align struct __esf to a 16-byte boundary which will also align
its size accordingly. This structure is allocated on the stack on
exception entry and the ABI prescribed 16-byte stack alignment
should be preserved.
Signed-off-by: Nicolas Pitre <npitre@baylibre.com>
Both operands of an operator in which the usual arithmetic
conversions are performed shall have the same essential
type category.
Signed-off-by: Flavio Ceolin <flavio.ceolin@intel.com>
We can't do atomic memory operations before the MMU is on. Let's create
a code path to set up MMU page tables without any lock. There are
obviously no concurrency issues at this stage.
Signed-off-by: Nicolas Pitre <npitre@baylibre.com>
GIC_INTID_SPURIOUS is a GIC-specific intid, so it's not valid for
custom interrupt controllers. Rework the logic a bit by comparing the
intid to the maximum possible intid instead.
Signed-off-by: Carlo Caione <ccaione@baylibre.com>
Currently _curr_cpu is only used by the get_cpu macro to quickly access
the cpu struct. This is not really necessary because we can access the
struct by directly referencing &(_kernel.cpus[cpu_num]) in assembly.
Signed-off-by: Carlo Caione <ccaione@baylibre.com>
Turns out that we could flatten the tree further as there are not
that many files to warrant a whole directory for this.
Signed-off-by: Nicolas Pitre <npitre@baylibre.com>
Split ARM and ARM64 architectures.
Details:
- CONFIG_ARM64 is decoupled from CONFIG_ARM (not a subset anymore)
- Arch and include AArch64 files are in a dedicated directory
(arch/arm64 and include/arch/arm64)
- AArch64 boards and SoC are moved to soc/arm64 and boards/arm64
- AArch64-specific DTS files are moved to dts/arm64
- The A72 support for the bcm_vk/viper board is moved in the
boards/bcm_vk/viper directory
Signed-off-by: Carlo Caione <ccaione@baylibre.com>