For vectored interrupts use the generated IRQ vector table instead of
relying on a custom-generated table.
Signed-off-by: Carlo Caione <ccaione@baylibre.com>
The whole mechanism of IRQ table generation is built around the
assumption that the IRQ vector table contains an array of addresses the
PC will be assigned to when the corresponding interrupt is triggered.
While this is correct for the majority of architectures (ARM, RISC-V
with CLIC in vectored mode, etc.), it is not valid in general (for
example RISC-V with CLINT/HLINT in vectored mode).
In this alternative format for the IRQ vector table, the hardware
assigns the PC to the address of the vector table entry corresponding
to the interrupt ID, and a subsequent jump from that entry services the
interrupt.
This means that each IRQ vector table entry contains a jump instruction
to a specific location instead of the address of that location.
This patch introduces support for this alternative IRQ vector table
format. The user can now select one format or the other through the
IRQ_VECTOR_TABLE_JUMP_BY_ADDRESS and IRQ_VECTOR_TABLE_JUMP_BY_CODE
Kconfig symbols.
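As a rough sketch (illustrative names only, not the generated
isr_tables.c), the two layouts differ as follows:

  #include <stdint.h>

  void isr_0(void);
  void isr_1(void);

  /* IRQ_VECTOR_TABLE_JUMP_BY_ADDRESS: each entry is an ISR address
   * and the hardware loads the PC from the entry. */
  const uintptr_t vector_table_by_address[] = {
      (uintptr_t)isr_0,
      (uintptr_t)isr_1,
  };

  /* IRQ_VECTOR_TABLE_JUMP_BY_CODE: the hardware jumps *into* the
   * table, so each entry must itself be an instruction, e.g. on
   * RISC-V something along the lines of:
   *
   *     vector_table_by_code:
   *         j isr_0
   *         j isr_1
   */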
Signed-off-by: Carlo Caione <ccaione@baylibre.com>
By default ARCH_IRQ_VECTOR_TABLE_ALIGN and ARCH_SW_ISR_TABLE_ALIGN are
set to 0. Use a more sensible value.
Signed-off-by: Carlo Caione <ccaione@baylibre.com>
The generation of the software ISR table and the IRQ vector table
(respectively generated by CONFIG_GEN_SW_ISR_TABLE and
CONFIG_GEN_IRQ_VECTOR_TABLE) should (in theory) go through three stages:
1. A placeholder table is generated in arch/common/isr_tables.c and
placed in an orphaned .gnu.linkonce.{irq_vector_table, sw_isr_table}
section
2. The real table is generated by arch/common/gen_isr_tables.py (creating
the build/zephyr/isr_tables.c file)
3. The real table is un-orphaned by moving it in a proper section with a
proper alignment
While all the steps are done automatically for the software ISR table,
for the IRQ vector table each architecture must take care of modifying
its own linker script to place the generated IRQ vector table somewhere
(basically step 3 is missing).
This is currently only done for two architectures: Cortex-M (ARMv7) and
ARC. When any other architecture tries to use the IRQ vector table,
the linker complains about the orphan section. For example:
Linking C executable zephyr/zephyr.elf
riscv64-zephyr-elf/bin/ld.bfd: warning: orphan section
`.gnu.linkonce.irq_vector_table' from
`zephyr/CMakeFiles/zephyr_final.dir/isr_tables.c.obj' being placed in
section `.gnu.linkonce.irq_vector_table'
This patch introduces a new CONFIG_ARCH_IRQ_VECTOR_TABLE_ALIGN for
architectures that require a special alignment for the IRQ vector
table, and a way to automatically place the IRQ vector table in its
proper section, the same way it is done for the software ISR table.
Signed-off-by: Carlo Caione <ccaione@baylibre.com>
Expose the Xtensa CCOUNT timing register (the lowest level CPU cycle
counter) using the arch_timing_*() API.
This is the simplest possible way to get this working. Future work
might focus on moving the rate configuration into devicetree in a
standard way, integrating with the platform clock driver on intel_adsp
such that the reported cycle rate tracks runtime changes (though IIRC
this is not a SOF requirement), and adding better test coverage to the
timing layer, which right now isn't exercised anywhere but in
benchmarks.
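For reference, a minimal sketch of how the timing layer backed by
CCOUNT is typically consumed through the timing_*() wrappers (header
path may vary between Zephyr versions):

  #include <timing/timing.h>

  void measure(void)
  {
      timing_t start, end;
      uint64_t cycles, ns;

      timing_init();
      timing_start();

      start = timing_counter_get();
      /* ... code under measurement ... */
      end = timing_counter_get();

      cycles = timing_cycles_get(&start, &end);
      ns = timing_cycles_to_ns(cycles);  /* rate comes from the arch layer */
      (void)ns;

      timing_stop();
  }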
Signed-off-by: Andy Ross <andrew.j.ross@intel.com>
ARMv7-A also supports TPIDRURO, so go ahead and use that for TLS,
enabling thread local storage for the other ARM architectures.
Add an __aeabi_read_tp function in case code was compiled to use it.
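For illustration (not the patch's actual assembly), reading the user
read-only thread ID register on ARMv7-A looks roughly like this:

  /* TPIDRURO (CP15 c13, c0, 3) is readable from user mode but writable
   * only by the kernel, which is what makes it a good home for the TLS
   * pointer; __aeabi_read_tp is expected to return this value. */
  static inline void *read_tpidruro(void)
  {
      void *tp;

      __asm__ volatile("mrc p15, 0, %0, c13, c0, 3" : "=r"(tp));
      return tp;
  }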
Signed-off-by: Keith Packard <keithp@keithp.com>
Control the usage of semihosting with a dedicated symbol, instead of
implying semihosting from the usage of `SEMIHOST_CONSOLE`. This allows
semihosting to be used without the semihost console.
Signed-off-by: Jordan Yates <jordan.yates@data61.csiro.au>
Moving this option to the subdirectory for boards might make it easier
to find, and will keep it next to some other board-related Kconfig
options set in the same file.
Signed-off-by: Martí Bolívar <marti.bolivar@nordicsemi.no>
The move to arch_switch() is a prerequisite for SMP support.
Make it optimal without the need for an ECALL roundtrip on every
context switch. Performance numbers from tests/benchmarks/sched:
Before:
unpend 107 ready 102 switch 188 pend 218 tot 615 (avg 615)
After:
unpend 107 ready 102 switch 170 pend 217 tot 596 (avg 595)
Signed-off-by: Nicolas Pitre <npitre@baylibre.com>
This reverts commit be28de692c.
The purpose of this commit will be reintroduced later on top of
a cleaner codebase.
Signed-off-by: Nicolas Pitre <npitre@baylibre.com>
According to Kconfig guidelines, boolean prompts must not start with
"Enable...". The following command has been used to automate the changes
in this patch:
sed -i "s/bool \"[Ee]nables\? \(\w\)/bool \"\U\1/g" **/Kconfig*
Signed-off-by: Gerard Marull-Paretas <gerard.marull@nordicsemi.no>
Use the CLINT to send interrupts to another CPU; SMP support is
incomplete without it.
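At its core this is just a write to the target hart's MSIP register; a
rough sketch, assuming the qemu "virt" CLINT base address (not the
actual Zephyr driver code):

  #include <stdint.h>

  #define CLINT_BASE 0x02000000UL  /* qemu "virt" machine */

  /* Writing 1 to the per-hart MSIP register raises a machine software
   * interrupt on that hart, which is how one CPU pokes another. */
  static inline void clint_send_ipi(uint32_t hartid)
  {
      volatile uint32_t *msip = (volatile uint32_t *)CLINT_BASE;

      msip[hartid] = 1U;
  }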
This patch only enables it for riscv-privilege platforms - specifically,
the "virt" one.
Signed-off-by: Ederson de Souza <ederson.desouza@intel.com>
Enable `arch_switch()` as preparation for SMP support. This patch
doesn't try to keep support for the old-style context swap - only
switch-based swap is supported, to keep things simple.
A fair amount of refactoring was done in this patch, especially
regarding the code that decides what to do in the ISR. In RISC-V, ECALL
instructions are used to signal several events, such as user space
system calls, forced syscalls, IRQ offload, return from syscall and
context switch. All of those are handled by the ISR, which also handles
interrupts. After the refactor, this "dispatching" step is done at the
beginning of the ISR (just after saving the general registers).
As with other platforms, the thread object itself is used as the thread
"switch handle" for the context swap.
Signed-off-by: Ederson de Souza <ederson.desouza@intel.com>
Change the CPU_CORTEX_R kconfig option to CPU_AARCH32_CORTEX_R to
distinguish the armv7 version from the armv8 version of Cortex-R.
Signed-off-by: Bradley Bolen <bbolen@lexmark.com>
The x86 and xtensa implementations of irq_offload() invoke synchronous
interrupts on the local CPU, and are therefore safe to use from within
an interrupt context. This is a cheap and portable way to exercise
nested interrupts, which are otherwise highly platform-dependent to
test. Add a kconfig to signal the capability.
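A minimal sketch of the kind of nested-interrupt exercise this
capability allows (handler names are illustrative; header path may vary
by Zephyr version):

  #include <irq_offload.h>

  static void inner_handler(const void *arg)
  {
      /* Runs in interrupt context, nested inside outer_handler. */
  }

  static void outer_handler(const void *arg)
  {
      /* Only valid on arches where irq_offload() is a synchronous
       * interrupt on the local CPU, which the new kconfig advertises. */
      irq_offload(inner_handler, NULL);
  }

  void exercise_nested_irq(void)
  {
      irq_offload(outer_handler, NULL);
  }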
Signed-off-by: Andy Ross <andrew.j.ross@intel.com>
MIPS (Microprocessor without Interlocked Pipelined Stages) is an
instruction set architecture (ISA) developed by MIPS Computer
Systems, now MIPS Technologies.
This commit provides MIPS architecture support to Zephyr. It is
compatible with the MIPS32 Release 1 specification.
Signed-off-by: Antony Pavlov <antonynpavlov@gmail.com>
This moves CONFIG_MMU and its children from arch/Kconfig into
kernel/Kconfig. These are used to enable kernel support of MMU
so they should be under kernel/.
Signed-off-by: Daniel Leung <daniel.leung@intel.com>
This adds the kconfig to allow reserving a number of page frames
which do not count towards free memory. This is to ensure that
there are enough page frames available for paging code and data.
Otherwise, it would be possible to exhaust all page frames via
anonymous memory mappings.
Signed-off-by: Daniel Leung <daniel.leung@intel.com>
Give the choice a name so that the soc/board developers can change the
default selection in their Kconfig.*.
For example:
choice CACHE_TYPE
default HAS_EXTERNAL_CACHE
endchoice
A similar issue was discussed here:
https://github.com/zephyrproject-rtos/zephyr/issues/6948
Signed-off-by: Dylan Hung <dylan_hung@aspeedtech.com>
Change-Id: I07c3e78a5243b30912f8e44fa3181fa163016318
These functions are those that need to be implemented by the backing
store outside the kernel. Promote them from z_* so they can be
included in documentation.
Signed-off-by: Daniel Leung <daniel.leung@intel.com>
These functions and data structures are those that need
to be implemented by the eviction algorithm and the application
outside the kernel. Promote them from z_* so they can be
included in documentation.
Signed-off-by: Daniel Leung <daniel.leung@intel.com>
If single thread builds are not supported by the
architecture, the MULTITHREADING option should be
prompt-less to block any modifications to it. We
also introduce an explicit ARCH-level Kconfig that
reflects whether the ARCH is capable of single-thread
Zephyr builds.
Signed-off-by: Ioannis Glaropoulos <Ioannis.Glaropoulos@nordicsemi.no>
The cache API currently shipped in Zephyr assumes that the cache
controller is always on-core and thus managed at the arch level. This
is not always the case, because many SoCs rely on external cache
controllers as a peripheral external to the core (for example the
PL310 cache controller and the L2Cxxx family). In some cases you also
want a single driver to control a whole set of cache controllers.
Rework the cache code, introducing support for external cache
controllers.
Signed-off-by: Carlo Caione <ccaione@baylibre.com>
On RISC-V 64-bit, GCC complains about an undefined reference
to 'ffs' via __builtin_ffs(). So implement a brute force
way to do it. Once the toolchain provides __builtin_ffs(),
this can be reverted.
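A sketch of the brute-force fallback in question (illustrative, not the
literal patch contents):

  /* 1-based index of the least significant set bit, 0 if no bit is
   * set, matching the ffs() contract __builtin_ffs() would provide. */
  static inline unsigned int brute_force_ffs(unsigned long v)
  {
      for (unsigned int i = 0; i < sizeof(v) * 8; i++) {
          if (v & (1UL << i)) {
              return i + 1;
          }
      }

      return 0;
  }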
Signed-off-by: Daniel Leung <daniel.leung@intel.com>
CONFIG_FPU: The architecture dependency list is redundant.
Having CPU_HAS_FPU selected by those archs as a dependency is
sufficient and cleaner.
CONFIG_FPU_SHARING: The default should always be y to be on the safe
side here, but as a compromise to avoid affecting existing configs,
let's move the default selection local to those configs that care,
again to avoid a growing list of conditionals here. Adjust the help
text, which applies to more than just Cortex-M.
Signed-off-by: Nicolas Pitre <npitre@baylibre.com>
Due to the use of gperf to generate hash table for kobjects,
the addresses of these kobjects cannot change during the last
few phases of linking (especially between zephyr_prebuilt.elf
and zephyr.elf). Because of this, the gperf generated data
needs to be placed at the end of memory to avoid pushing symbols
around in memory. This prevents moving these generated blocks
to earlier sections, for example, pinned data section needed
for demand paging. So create placeholders for use in
intermediate linking to reserve space for these generated blocks.
Due to uncertainty about the size of these blocks, more space than
strictly needed is reserved, which could result in wasted space. This
does, however, retain the use of the hash table for faster lookups.
Signed-off-by: Daniel Leung <daniel.leung@intel.com>
This adds bits to the paging timing histogram collection routines
so they can use timing functions to collect execution time data.
Signed-off-by: Daniel Leung <daniel.leung@intel.com>
The variable tsc_freq is not accessible from user threads, which
prevents user threads from converting cycles to ns.
So make tsc_freq available globally in the default memory
domain so the conversion is possible.
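The conversion that needs tsc_freq from user mode is just the following
(a sketch, not the in-tree helper):

  #include <stdint.h>

  static inline uint64_t tsc_cycles_to_ns(uint64_t cycles, uint64_t tsc_freq)
  {
      /* ns = cycles / (cycles per second) * 1e9 */
      return (cycles * 1000000000ULL) / tsc_freq;
  }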
Signed-off-by: Daniel Leung <daniel.leung@intel.com>
This adds the bits to record execution time of eviction selection,
and backing store page-in/page-out in histograms.
Signed-off-by: Daniel Leung <daniel.leung@intel.com>
This adds more bits to gather statistics on demand paging,
e.g. clean vs dirty pages evicted, # page faults with
IRQ locked/unlocked, etc.
Also extends this to gather per-thread demand paging
statistics.
Signed-off-by: Daniel Leung <daniel.leung@intel.com>
Xtensa cores are highly configurable, so a given SoC may not have
the instructions needed for the hardware-assisted atomic
operations. So instead of selecting the arch-specific atomic
operations kconfig, use an "imply" instead, so SoC or board
configs can disable it.
Signed-off-by: Daniel Leung <daniel.leung@intel.com>
Split ARM and ARM64 architectures.
Details:
- CONFIG_ARM64 is decoupled from CONFIG_ARM (not a subset anymore)
- Arch and include AArch64 files are in a dedicated directory
(arch/arm64 and include/arch/arm64)
- AArch64 boards and SoC are moved to soc/arm64 and boards/arm64
- AArch64-specific DTS files are moved to dts/arm64
- The A72 support for the bcm_vk/viper board is moved to the
boards/bcm_vk/viper directory
Signed-off-by: Carlo Caione <ccaione@baylibre.com>
The fatal log now contains
- Trap type in human readable representation
- Integer registers visible to the program when trap was taken
- Special register values such as PC and PSR
- Backtrace with PC and SP
If CONFIG_EXTRA_EXCEPTION_INFO is enabled, then all the above is
logged. If not, only the special registers are logged.
The format is inspired by the GRMON debug monitor and TSIM simulator.
A quick guide on how to use the values is in fatal.c.
It now looks like this:
E: tt = 0x02, illegal_instruction
E:
E: INS LOCALS OUTS GLOBALS
E: 0: 00000000 f3900fc0 40007c50 00000000
E: 1: 00000000 40004bf0 40008d30 40008c00
E: 2: 00000000 40004bf4 40008000 00000003
E: 3: 40009158 00000000 40009000 00000002
E: 4: 40008fa8 40003c00 40008fa8 00000008
E: 5: 40009000 f3400fc0 00000000 00000080
E: 6: 4000a1f8 40000050 4000a190 00000000
E: 7: 40002308 00000000 40001fb8 000000c1
E:
E: psr: f30000c7 wim: 00000008 tbr: 40000020 y: 00000000
E: pc: 4000a1f4 npc: 4000a1f8
E:
E: pc sp
E: #0 4000a1f4 4000a190
E: #1 40002308 4000a1f8
E: #2 40003b24 4000a258
Signed-off-by: Martin Åberg <martin.aberg@gaisler.com>
We are setting CONFIG_GEN_PRIV_STACKS even though AArch64 actually
uses a statically allocated privileged stack.
This error was not captured by the tests because we only verify whether
a read/write to the privileged stack fails, but it can fail for a lot
of reasons, including when the pointer to the privileged stack is not
initialized at all, as in this case.
With this patch we deselect CONFIG_GEN_PRIV_STACKS and we fix the
mem_protect/userspace test to correctly probe the privileged stack.
Signed-off-by: Carlo Caione <ccaione@baylibre.com>
There actually is no need for a separate kconfig here, as
the kernel VM address and SRAM address can be used to figure
out if the kernel is linked in virtual address space.
Signed-off-by: Daniel Leung <daniel.leung@intel.com>
The xtensa atomics layer was written with hand-coded assembly that had
to be called as functions. That's needlessly slow, given that the low
level primitives are a two-instruction sequence. Ideally the compiler
should see this as an inline to permit it to better optimize around
the needed barriers.
There was also a bug with the atomic_cas function, which had a loop
internally instead of returning the old value synchronously on a
failed swap. That's benign right now because our existing spin lock
does nothing but retry it in a tight loop anyway, but it's incorrect
per spec and would have caused a contention hang with more elaborate
algorithms (for example a spinlock with backoff semantics).
Remove the old implementation and replace with a much smaller inline C
one based on just two assembly primitives.
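The primitive pair in question is a WSR to SCOMPARE1 followed by
S32C1I; a hedged sketch of the inline compare-and-swap built on it (not
the exact in-tree code):

  #include <stdint.h>

  static inline int32_t xtensa_cas_sketch(volatile int32_t *addr,
                                          int32_t expected, int32_t desired)
  {
      int32_t old = desired;

      /* SCOMPARE1 holds the expected value; s32c1i stores `desired`
       * only if *addr == SCOMPARE1 and returns the previous value. */
      __asm__ volatile("wsr %1, SCOMPARE1\n\t"
                       "s32c1i %0, %2, 0"
                       : "+r"(old)
                       : "r"(expected), "r"(addr)
                       : "memory");

      return old;  /* equal to `expected` on success */
  }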
This patch also contains a little bit of refactoring: each atomics
scheme has been split out into a separate header, and the
ATOMIC_OPERATIONS_CUSTOM kconfig has been renamed to
ATOMIC_OPERATIONS_ARCH to better capture what it means.
Signed-off-by: Andy Ross <andrew.j.ross@intel.com>
This adds a new kconfig CONFIG_SRAM_OFFSET to specify the offset
from beginning of SRAM where the kernel begins. On x86 and
PC compatible platforms, the first 1MB of RAM is reserved and
Zephyr should not link anything there. However, this 1MB still
needs to be mapped by the MMU to access various platform related
information. CONFIG_SRAM_OFFSET serves a similar function to
CONFIG_KERNEL_VM_OFFSET and is needed for proper phys/virt
address translations.
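In rough terms, the two offsets pair up in the phys/virt translation
like this (a sketch with illustrative values; the real helpers live in
the kernel's MMU code):

  #include <stdint.h>

  #define SRAM_BASE        0x00000000UL  /* CONFIG_SRAM_BASE_ADDRESS */
  #define SRAM_OFFSET      0x00100000UL  /* CONFIG_SRAM_OFFSET: skip the first 1MB */
  #define KERNEL_VM_BASE   0xC0000000UL  /* CONFIG_KERNEL_VM_BASE */
  #define KERNEL_VM_OFFSET 0x00100000UL  /* CONFIG_KERNEL_VM_OFFSET */

  static inline uintptr_t phys_to_virt(uintptr_t phys)
  {
      /* The kernel image starts at SRAM_BASE + SRAM_OFFSET physically
       * and at KERNEL_VM_BASE + KERNEL_VM_OFFSET virtually; translation
       * is just the difference between the two. */
      return phys - (SRAM_BASE + SRAM_OFFSET)
                  + (KERNEL_VM_BASE + KERNEL_VM_OFFSET);
  }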
Signed-off-by: Daniel Leung <daniel.leung@intel.com>
For applications that make use of the FPU on Cortex-M,
we enforce the FPU sharing registers mode, because the
compiler, under certain optimization regimes, may use
FP instructions and create FP context in any thread,
so the unshared registers mode is not practically
supported.
In addition to that we force FPU_SHARING to depend on
MULTITHREADING, as FPU sharing mode does not make sense
outside the normal multi-threaded builds.
Signed-off-by: Ioannis Glaropoulos <Ioannis.Glaropoulos@nordicsemi.no>
Some arches like x86 need all memory mapped so that they can
fetch information placed arbitrarily by firmware, like ACPI
tables.
Ensure that if this is the case, the kernel won't accidentally
clobber it by thinking the relevant virtual memory is unused.
Otherwise this has no effect on page frame management.
Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
All arch_ APIs and macros are implemented, and the page fault
handling code will call into the core kernel.
Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>