Instead of performing a set of relative address comparisons using
pointers of type 'uint8_t *', we leverage the existing IN_RANGE()
macro and perform the comparisons with 'uintptr_t'.
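As a minimal sketch of the pattern (the helper and its parameters are illustrative, not from the patch), using Zephyr's inclusive IN_RANGE(val, min, max) macro:
```c
#include <stddef.h>
#include <stdint.h>
#include <stdbool.h>
#include <zephyr/sys/util.h>

/* Illustrative helper: is ptr within [base, base + size)? */
static bool addr_in_region(const void *ptr, const void *base, size_t size)
{
	uintptr_t addr = (uintptr_t)ptr;
	uintptr_t start = (uintptr_t)base;

	/* IN_RANGE() is inclusive on both ends, hence the size - 1 */
	return IN_RANGE(addr, start, start + size - 1U);
}
```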
Signed-off-by: Peter Mitsis <peter.mitsis@intel.com>
Add support for mutable devices. Mutable devices are those which
can be modified after declaration, in-place, in kernel mode.
In order for a device to be mutable, the following must be true:
* `CONFIG_DEVICE_MUTABLE` must be y-selected
* the Devicetree bindings for the device must include
`mutable.yaml`
* the Devicetree node must include the `zephyr,mutable` property
Signed-off-by: Christopher Friedt <cfriedt@meta.com>
This moves the inclusion of demand_paging.h out of kernel/mm.h,
so that users of demand paging APIs must include the header
explicitly. Since the main user is the kernel itself, we can be
more disciplined about header inclusion.
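For example (header path as found in-tree around this change; treat the exact path as an assumption):
```c
/* Users of demand paging APIs now pull this in explicitly instead of
 * relying on kernel/mm.h to do it for them.
 */
#include <zephyr/kernel/mm/demand_paging.h>
```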
Signed-off-by: Daniel Leung <daniel.leung@intel.com>
Add a Kconfig option to indicate whether or not thread
local storage is used to store the current thread.
The function using it can be called from ISRs, and using
TLS variables in that context may not be allowed.
Signed-off-by: Flavio Ceolin <flavio.ceolin@intel.com>
This separates demand paging related headers into their own file
instead of being stuffed inside the main kernel memory
management header file.
Signed-off-by: Daniel Leung <daniel.leung@intel.com>
This moves the k_* memory management functions from sys/ into
kernel/ includes, as these are kernel public APIs. The z_*
functions are further separated into the kernel internal
header directory.
Also made a quick change to doxygen to group sys_mem_* into
the OS Memory Management group so they will appear in the documentation.
Signed-off-by: Daniel Leung <daniel.leung@intel.com>
Some platforms already have the .bss section zeroed out externally before
Zephyr initialization, so there is no sense in zeroing it out a second time
from software.
Such a boot-time optimization can be critical, e.g. for RTL simulation.
Signed-off-by: Alexander Razinkov <alexander.razinkov@syntacore.com>
Extends the concept of halting a thread from just aborting a thread
to both aborting and suspending a thread.
Part of this involves updating k_thread_suspend() to operate in a
similar fashion to that of k_thread_abort().
Signed-off-by: Peter Mitsis <peter.mitsis@intel.com>
Extracts the essential thread synchronization logic when aborting
a thread from z_thread_abort() and moves it to its own routine
called z_thread_halt().
Signed-off-by: Peter Mitsis <peter.mitsis@intel.com>
The routine halt_thread() acts nearly identically to end_thread(),
except that instead of only halting the thread if the _THREAD_DEAD
state bit is not set, it halts the thread if the bit specified by the
parameter new_state is not set (which is always _THREAD_DEAD).
Signed-off-by: Peter Mitsis <peter.mitsis@intel.com>
The halt queue will be used to identify threads that are waiting
for a thread on another CPU to finish suspending.
Signed-off-by: Peter Mitsis <peter.mitsis@intel.com>
The basic spinlock implementation is based on a single
atomic variable and doesn't guarantee locking fairness
across multiple CPUs. It's even possible for a single CPU
to win the contention every time, which results
in a live-lock.
Ticket spinlocks provide FIFO ordering of lock acquisition,
which resolves this unfairness at the cost of a slightly
increased memory footprint.
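A minimal sketch of the ticket idea (this is not Zephyr's actual implementation; the type and function names are illustrative):
```c
#include <zephyr/sys/atomic.h>

struct ticket_lock {
	atomic_t tail;  /* next ticket to hand out */
	atomic_t owner; /* ticket currently allowed to hold the lock */
};

static inline void ticket_lock_acquire(struct ticket_lock *l)
{
	/* atomic_inc() returns the previous value: our ticket */
	atomic_val_t ticket = atomic_inc(&l->tail);

	/* Spin until our number comes up; CPUs are served FIFO */
	while (atomic_get(&l->owner) != ticket) {
	}
}

static inline void ticket_lock_release(struct ticket_lock *l)
{
	atomic_inc(&l->owner); /* hand the lock to the next ticket */
}
```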
Signed-off-by: Alexander Razinkov <alexander.razinkov@syntacore.com>
Move the syscall_handler.h header, which is used internally only, to a
dedicated internal folder that should not be used outside of Zephyr.
Signed-off-by: Anas Nashif <anas.nashif@intel.com>
k_thread_name_get has an inconsistent signature. The function
declaration uses k_tid_t, but the implementation uses
struct k_thread *. Change the implementation to use k_tid_t.
Signed-off-by: Flavio Ceolin <flavio.ceolin@intel.com>
virt_page_phys_get can be called with the phys parameter set to NULL
when the intention is just to check whether a virtual address is mapped.
This function is generally overridden by an arch API that checks whether
phys is NULL before using it, but this default implementation doesn't.
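A hedged sketch of the fixed default implementation (the page-table lookup helper is hypothetical; the NULL guard is the point):
```c
#include <errno.h>
#include <stdbool.h>
#include <stdint.h>

/* Hypothetical lookup used only for illustration */
extern bool page_table_lookup(void *virt, uintptr_t *pa);

int virt_page_phys_get(void *virt, uintptr_t *phys)
{
	uintptr_t pa;

	if (!page_table_lookup(virt, &pa)) {
		return -EFAULT; /* virt is not mapped */
	}

	/* Callers may pass phys == NULL just to test the mapping */
	if (phys != NULL) {
		*phys = pa;
	}

	return 0;
}
```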
Signed-off-by: Flavio Ceolin <flavio.ceolin@intel.com>
Move the Zephyr-specific config options from
modules/hostap/Kconfig to the corresponding Kconfig files where
the options are specified.
Signed-off-by: Jukka Rissanen <jukka.rissanen@nordicsemi.no>
Thread userspace local data is used to store errno per thread
when thread local storage support is not available. However, if the C
library has native errno support, there is no need to enable
thread userspace local data to store errno per thread. Therefore,
amend the default for CONFIG_THREAD_USERSPACE_LOCAL_DATA so that
it is not enabled if the C library has native errno support.
Signed-off-by: Daniel Leung <daniel.leung@intel.com>
The documentation suggests that k_timer_start can be invoked from ISR
and preemptive contexts; however, an assertion failure occurs if one
k_timer_start call preempts another for the same timer instance. This
commit mitigates the issue by implementing a spinlock throughout the
k_timer_start function, ensuring thread-safety.
Fixes: #62908
Signed-off-by: Pedro Sousa <sousapedro596@gmail.com>
Sometimes the generic address range checker is not adequate
(think Xtensa cached/uncached pointers). This provides a way
to implement custom memory range checkers for those
situations. When enabled, sys_mm_is_phys_addr_in_range()
and sys_mm_is_virt_addr_in_range() must be implemented.
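A hedged sketch of such a custom checker (the region macros are made up for illustration, and the exact prototype is assumed):
```c
#include <stdbool.h>
#include <stdint.h>
#include <zephyr/sys/util.h>

/* Made-up regions: the same RAM is reachable through cached and
 * uncached windows, so a single start/end comparison is not enough.
 */
#define RAM_CACHED_START   0x60000000UL
#define RAM_CACHED_END     0x6FFFFFFFUL
#define RAM_UNCACHED_START 0xA0000000UL
#define RAM_UNCACHED_END   0xAFFFFFFFUL

bool sys_mm_is_phys_addr_in_range(uintptr_t phys)
{
	return IN_RANGE(phys, RAM_CACHED_START, RAM_CACHED_END) ||
	       IN_RANGE(phys, RAM_UNCACHED_START, RAM_UNCACHED_END);
}
```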
Signed-off-by: Daniel Leung <daniel.leung@intel.com>
The early random get function was making many wrong assumptions
about the random subsystem and entropy drivers. First, it assumed
that entropy_get_entropy() would be ISR-safe; that is not right.
The driver has a dedicated ISR-safe callback, and if that one is not
implemented or not working, it is not okay to fall back to the other
callback.
Second, the fallback to the random subsystem is even more problematic,
since its implementations can use kernel services to protect internal
state and be thread-safe.
Another incorrect thing in this function was the guard around it.
It was needed by features like stack randomization and stack canaries,
but the guard did not match those conditions. Just remove it; when
it is not needed, the linker will take care of it.
The drawback of this change is that in the absence of an entropy
generator that can be called from an ISR, the randomness is very
weak.
Signed-off-by: Flavio Ceolin <flavio.ceolin@intel.com>
Allow targets to come up with their own early random generator,
since the default can be not so random due to constraints.
Signed-off-by: Flavio Ceolin <flavio.ceolin@intel.com>
Rename z_early_boot_rand_get to z_early_rand_get to be consistent
with other early functions.
Signed-off-by: Flavio Ceolin <flavio.ceolin@intel.com>
Platforms that determine their basic timer frequency at runtime instead of
build time cannot compute thread initialization timeouts during
compilation.
Switch back to storing the init_delay value in milliseconds and perform the
conversion to a k_timeout_t at runtime.
Signed-off-by: Keith Packard <keithp@keithp.com>
Instead of adding every possible subsystem which places variables in the C
library memory partition in libc-hooks.h, place those conditions in the
related Kconfig files and simplify libc-hooks.h to just look at
CONFIG_NEED_LIBC_MEM_PARTITION.
Signed-off-by: Keith Packard <keithp@keithp.com>
rand32.h does not make much sense, since the random subsystem
provides more APIs than just getting a random 32-bit value.
Rename it to random.h to be consistent with other
subsystems.
Signed-off-by: Flavio Ceolin <flavio.ceolin@intel.com>
The wording about deprecating arch_kernel_init() in favor of prep_c()
has never materialized. Various architectures are using it to
perform initialization, so remove the wording.
Signed-off-by: Daniel Leung <daniel.leung@intel.com>
Assert that the handler of a work item is not NULL when submitting
it to the queue. This allows early detection of code that is
submitting a non-NULL work item with a NULL handler to
the work queue (where it happens), rather than right before the
work item gets executed in the queue (when it happens).
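For reference, the call site being guarded looks roughly like this (names are illustrative):
```c
#include <zephyr/kernel.h>

static void blink_handler(struct k_work *work)
{
	/* ... actual work ... */
}

static struct k_work blink_work;

void setup(void)
{
	k_work_init(&blink_work, blink_handler); /* handler must be non-NULL */

	/* A NULL handler now asserts here, at submission time, rather
	 * than later when the queue thread runs the item.
	 */
	k_work_submit(&blink_work);
}
```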
Signed-off-by: Yong Cong Sin <ycsin@meta.com>
Zephyr's code base uses MP_MAX_NUM_CPUS to
know how many cores exist on the target. It is
also expected that the symbols MP_MAX_NUM_CPUS
and MP_NUM_CPUS have the same value, so let's
just use MP_MAX_NUM_CPUS and simplify things.
Signed-off-by: Flavio Ceolin <flavio.ceolin@intel.com>
Integrates the object core statistics framework into the following
kernel objects:
sys_mem_blocks, k_mem_slab
threads, _cpu, z_kernel
Signed-off-by: Peter Mitsis <peter.mitsis@intel.com>
Refactors CPU usage (thread runtime stats) to make it easier to
integrate with the object core statistics framework.
Signed-off-by: Peter Mitsis <peter.mitsis@intel.com>
Rearranges the k_mem_slab fields so that information that describes
how much of the memory slab is used is co-located. This will allow
easier integration of its statistics into the object core statistics
reporting framework.
Signed-off-by: Peter Mitsis <peter.mitsis@intel.com>
We don't need to re-implement a function to get the current CPU.
Simply use _current_cpu, which even contains additional sanity checks.
Signed-off-by: Flavio Ceolin <flavio.ceolin@intel.com>
When running inside the kernel we can use _current instead of
k_current_get(), which can incur an additional function call and
checks.
Signed-off-by: Flavio Ceolin <flavio.ceolin@intel.com>
This adds a function k_object_is_valid() to check whether a kernel
object exists, is of a certain type, and has been initialized.
This replaces the same (or very similar) code that had been
copied from the kernel into the network subsystem.
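A hedged usage sketch, assuming a signature along the lines of bool k_object_is_valid(const void *obj, enum k_objects otype):
```c
#include <errno.h>
#include <zephyr/kernel.h>

int take_checked(struct k_sem *sem)
{
	/* Reject anything that is not a known, initialized semaphore */
	if (!k_object_is_valid(sem, K_OBJ_SEM)) {
		return -EINVAL;
	}

	return k_sem_take(sem, K_FOREVER);
}
```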
Signed-off-by: Daniel Leung <daniel.leung@intel.com>
The original idea of z_current_get() was to be the counterpart
of k_current_get() for when the thread local variable for the current
thread has not yet been initialized (if TLS is enabled); otherwise they
are the same function. Now that z_current_get() is being used
outside of the core kernel, rename it under the kernel namespace so
other subsystems can conceptually use it too.
Signed-off-by: Daniel Leung <daniel.leung@intel.com>
Storing this value in milliseconds rather than using k_timeout_t requires
the system to perform division at runtime to convert types. This pulls in
the 64-bit soft division code on platforms without hardware for this.
Perform the conversion at build time instead by using the k_timeout_t
representation directly.
The init_delay field was moved within the _static_thread_data structure to
avoid introducing a hole for alignment on 32-bit systems when using 64-bit
timeouts.
Use SYS_TIMEOUT_MS instead of K_MSEC so that the initial delay can be set
to forever.
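The difference in a nutshell (the config option name is illustrative):
```c
/* K_MSEC() cannot express "forever": */
k_timeout_t t1 = K_MSEC(CONFIG_MY_DELAY_MS);

/* SYS_TIMEOUT_MS() maps SYS_FOREVER_MS (-1) to K_FOREVER: */
k_timeout_t t2 = SYS_TIMEOUT_MS(CONFIG_MY_DELAY_MS);
```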
Signed-off-by: Keith Packard <keithp@keithp.com>
Previously we limited the maximum number of CPU cores to 5; now we
bump this restriction so we can use 12 cores.
Signed-off-by: Eugeniy Paltsev <Eugeniy.Paltsev@synopsys.com>
Signed-off-by: Evgeniy Paltsev <PaltsevEvgeniy@gmail.com>
The signal_poll_event function was previously called without the poll
lock held. This created a race condition between a thread calling k_poll
to wait for an event and another thread signalling this same event.
The race resulted in the waiting thread staying pending while the handle
to it got removed from the notifyq, meaning it couldn't get woken up
again.
Signed-off-by: Ambroise Vincent <ambroise.vincent@arm.com>
This internal kernel API is misplaced in a public kernel header. Just
make it available to the code using it in the kernel.
Signed-off-by: Anas Nashif <anas.nashif@intel.com>
The _EXPIRED macro is no longer necessary. It is a relic of an older
timeout processing algorithm from several years ago.
Signed-off-by: Peter Mitsis <peter.mitsis@intel.com>
This is a private kernel header with private kernel APIs; it should not
be exposed in the public zephyr include directory.
One sample remains to be fixed (metairq_dispatch): it currently uses
private APIs from that header, which should not be the case.
Signed-off-by: Anas Nashif <anas.nashif@intel.com>
This header does not expose any public APIs, so move it under
kernel/include and change files including it.
Signed-off-by: Anas Nashif <anas.nashif@intel.com>
Add a missing assert argument, fixes:
zephyrproject/zephyr/kernel/dynamic.c: In function 'dyn_cb':
zephyrproject/zephyr/include/zephyr/sys/__assert.h:44:52: warning:
format '%p' expects a matching 'void *' argument [-Wformat=]
That started to break the build since:
d7846de548 assert: check format arguments for correctness
Signed-off-by: Fabio Baltieri <fabiobaltieri@google.com>
Add an assert to ensure the pointer provided by the user points to one
of the available blocks in the slab.
Signed-off-by: Carles Cufi <carles.cufi@nordicsemi.no>
Change the signature of the k_mem_slab_free() function,
replacing the old void **mem parameter with void *mem.
The following function:
void k_mem_slab_free(struct k_mem_slab *slab, void **mem);
has the wrong signature. mem is only used as a regular pointer, so there
is no need to use a double-pointer. The correct signature should be:
void k_mem_slab_free(struct k_mem_slab *slab, void *mem);
The issue with the current signature, although functional, is that it is
extremely confusing. I myself, a veteran Zephyr developer, was confused
by this parameter when looking at it recently.
All in-tree uses of the function have been adapted.
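The resulting caller-side change, assuming a slab `my_slab` defined elsewhere:
```c
void *block;

if (k_mem_slab_alloc(&my_slab, &block, K_NO_WAIT) == 0) {
	/* ... use block ... */

	/* old: k_mem_slab_free(&my_slab, &block);  double pointer */
	k_mem_slab_free(&my_slab, block); /* new: pass the block itself */
}
```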
Fixes #61888.
Signed-off-by: Carles Cufi <carles.cufi@nordicsemi.no>
Combining Meta IRQs with cooperative threads requires extra care to
return to pre-empted cooperative threads when returning from a Meta IRQ.
This is only needed when there are cooperative threads that are not also
Meta IRQs. This PR saves some space & time when the number of Meta IRQs
is equal to the number of available cooperative threads.
Signed-off-by: Florian Grandel <fgrandel@code-for-humans.de>
CONFIG_COVERAGE has been incorrectly used to change the code
defaults of other Kconfig options (stack sizes, etc.)
as well as some samples' behaviour,
which should not have depended on it.
Instead, those should have depended on COVERAGE_GCOV
which, being the one that adds special code and
temporary RAM storage for embedded targets,
requires changes to many features.
When building for the native targets, all this was
unnecessary.
=> Fix the dependency.
Signed-off-by: Alberto Escolar Piedras <alberto.escolar.piedras@nordicsemi.no>
This enables -Wshadow to warn about shadowed variables in
in-tree code under arch/, boards/, drivers/, kernel/,
lib/, soc/, and subsys/.
Note that this does not enable it globally because
out-of-tree modules will probably take some time to fix
(or not at all depending on the project), and it would be
great to avoid introduction of any new shadow variables
in the meantime.
Also note that this tries to be done in a minimally
invasive way so it is easy to revert when we enable
-Wshadow globally. Source files under modules/, samples/
and tests/ are currently excluded because there does not
seem to be a trivial way to add -Wshadow there without
going through all CMakeLists.txt to add the option
(as there are 1000+ files to change).
Signed-off-by: Daniel Leung <daniel.leung@intel.com>
This allows for further (out of tree) customisation of the boot
banner version string when devices boot.
Signed-off-by: Torsten Rasmussen <Torsten.Rasmussen@nordicsemi.no>
Signed-off-by: Jamie McCrae <jamie.mccrae@nordicsemi.no>
When `CONFIG_FPU_SHARING` is enabled each `k_thread` struct has a saved
floating point context (`saved_fp_context`). During a context switch, the
current FPU owner's (`_current_cpu->arch.fpu_owner`) registers are saved
to its `saved_fp_context`, and the destination thread's FPU registers are
loaded from its `saved_fp_context`.
When a thread ends, it does not release ownership of the FPU
(`_current_cpu->arch.fpu_owner`). This is problematic if the `k_thread`
struct was allocated on the stack. The next context switch will save the
FPU registers into `k_thread -> saved_fp_context` which may now be out of
scope. This will likely (but not always) result in a crash.
Adding `arch_float_disable(thread);` when a thread ends disables
preservation of floating point context information, fixing this issue.
Signed-off-by: Grant Ramsay <gramsay@enphaseenergy.com>
When CONFIG_KERNEL_DIRECT_MAP is enabled, the region to be mapped
or unmapped can be outside of the virtual memory space, wholly
within it, or overlap partially. Additional processing is
needed to make sure we only manipulate the bits within
the bitmap, in other words, only the pages represented by
the bitmap.
Fixes #59549
Signed-off-by: Daniel Leung <daniel.leung@intel.com>
Add a new option to use thread local storage for stack
canaries. This makes it harder to find the canaries' location
and value. This is made optional because there is
a performance and size penalty when using it.
Signed-off-by: Flavio Ceolin <flavio.ceolin@intel.com>
In commit d537267f, the check on thread abortion was moved from next_up
to z_get_next_switch_handle. However, next_up is also called from
z_swap_next_thread, so the check on thread abortion is now missing there.
This sometimes caused the thread to be stuck in the ABORTING + PENDING
state during test_smp_switch_torture in tests/kernel/smp.
To avoid such cases in the future, it is worth leaving the check in next_up.
Signed-off-by: Vadim Shakirov <vadim.shakirov@syntacore.com>
This is meant as a substitute for sys_clock_timeout_end_calc().
Current sys_clock_timeout_end_calc() usage opens up many bug
possibilities due to the open-coded nature of the actual timeout
evaluation. Issue #50611 is one example.
- Some users store the returned value in a signed variable, others in
an unsigned one, making the comparison with UINT64_MAX (corresponding
to K_FOREVER) wrong in the signed case.
- Some users compute the difference and store that in a signed variable
to compare against 0 which still doesn't work with K_FOREVER. And when
this difference is used as a timeout argument then the K_FOREVER
nature of the timeout is lost.
- Some users complicate their code by special-casing K_NO_WAIT and
K_FOREVER inline which is bad for both code readability and binary
size.
Let's introduce a better abstraction to deal with absolute timepoints
with an opaque type to be used with a well-defined API.
The word "timeout" was avoided in the naming on purpose as the timeout
namespace is quite crowded already and it is preferable to make a
distinction between relative time periods (timeouts) and absolute time
values (timepoints).
A few stacks are also adjusted as they were too tight on X86.
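A usage sketch of the new API (the worker function is illustrative):
```c
/* Capture an absolute deadline; K_FOREVER is handled correctly */
k_timepoint_t end = sys_timepoint_calc(K_MSEC(100));

while (!sys_timepoint_expired(end)) {
	/* Remaining time as a k_timeout_t, K_FOREVER-safe */
	k_timeout_t remaining = sys_timepoint_timeout(end);

	wait_for_something(remaining); /* illustrative helper */
}
```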
Signed-off-by: Nicolas Pitre <npitre@baylibre.com>
With some of the recent work to disable unnecessary system
calls, there is a scenario where `z_impl_k_thread_stack_free()`
is not defined and an undefined symbol error occurs.
Safety was very concerned that dynamic thread stack code might
touch other code that does not use malloc, so add a separate file
for the stack alloc and free stubs.
Signed-off-by: Christopher Friedt <cfriedt@meta.com>
This allows for builds with CONFIG_SYS_CLOCK_EXISTS=n in which case
busy waits are achieved with a crude CPU loop. If ever accuracy is
needed even with such a configuration then implementing arch_busy_wait()
should be considered.
Signed-off-by: Nicolas Pitre <npitre@baylibre.com>
Since the rbtree is being used as a list, because we can no
longer assume that the object pointer is the address of the
data field in the dynamic object struct, let's just use
the already existing dlist for tracking dynamic kernel
objects.
Signed-off-by: Flavio Ceolin <flavio.ceolin@intel.com>
Fix the preferred allocation logic. If the pool is preferred but
POOL_SIZE is 0, or the pool allocation fails, it falls back to heap
allocation if that is enabled.
Signed-off-by: Flavio Ceolin <flavio.ceolin@intel.com>
Add support for dynamic thread stack objects. A new container
for this kernel object was added to avoid imposing its alignment
constraint on all dynamic objects.
Signed-off-by: Flavio Ceolin <flavio.ceolin@intel.com>
Add a new API to dynamically allocate kernel objects that allows
passing an arbitrary size. This new API makes it possible to allocate
dynamic thread stacks.
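A hedged sketch, assuming the new allocator is exposed as k_object_alloc_size(enum k_objects otype, size_t size):
```c
#include <zephyr/kernel.h>

void *alloc_stack_object(size_t size)
{
	/* Allocate a stack object whose size is only known at runtime;
	 * K_OBJ_THREAD_STACK_ELEMENT is the kernel object type used
	 * for thread stacks.
	 */
	return k_object_alloc_size(K_OBJ_THREAD_STACK_ELEMENT, size);
}
```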
Signed-off-by: Flavio Ceolin <flavio.ceolin@intel.com>
While the LOCKED pattern is universally useful it can be misused. This
change therefore exposes the LOCKED pattern with extensive usage
documentation to reduce the risk of abuse or unintended deadlock.
Signed-off-by: Florian Grandel <fgrandel@code-for-humans.de>
Update the return value of functions that modify the internal event
state from `void` to `uint32_t`, so that calling code can determine
whether the event was already in a given state, or if the call modified
it.
This simplifies the usage of `struct k_event` as an alternative to
`atomic_t` that users can block on.
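Callers can now tell whether they were the ones to change the state (the event object name is illustrative):
```c
/* k_event_post() now returns the event state prior to posting */
uint32_t prev = k_event_post(&my_event, BIT(0));

if ((prev & BIT(0)) == 0U) {
	/* this call is the one that set the bit */
}
```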
Implements #57216
Signed-off-by: Jordan Yates <jordan.yates@data61.csiro.au>
Scheduling relative timeouts from within timer callbacks (=sys clock ISR
context) differs from scheduling relative timeouts from an application
context.
This change documents and explains the rationale of this distinction.
Signed-off-by: Florian Grandel <fgrandel@code-for-humans.de>
Device dependencies are not always required, so make them optional via
CONFIG_DEVICE_DEPS. When enabled, the gen_device_deps script will run so
that dependencies are collected and part of the final image. Related
APIs will be also made available. Since device dependencies are used in
just a few places (power domains), disable the feature by default. When
not enabled, a second linking pass will not be required.
Signed-off-by: Gerard Marull-Paretas <gerard.marull@nordicsemi.no>
The option can now be set by projects. This change will also allow
making it dependent on a future CONFIG_DEVICE_DEPS option.
Signed-off-by: Gerard Marull-Paretas <gerard.marull@nordicsemi.no>
Rename the Kconfig option to be in line with recent renamings in device
handles/dependencies.
Signed-off-by: Gerard Marull-Paretas <gerard.marull@nordicsemi.no>
Rename struct device `handles` member to `deps`, in line with previous
renamings in the device API.
Signed-off-by: Gerard Marull-Paretas <gerard.marull@nordicsemi.no>
This adds a few lines using zephyr_syscall_header() to include
headers containing syscall function prototypes.
Signed-off-by: Daniel Leung <daniel.leung@intel.com>
Only set a CPU as active (in the PM subsystem) when the CPU is
effectively initialized. We cannot assume in the PM subsystem that all
CPUs were initialized, since when the option CONFIG_SMP_BOOT_DELAY is
used, CPUs are initialized on demand by the application.
Note that once CPUs are properly initialized, the subsystem is able to
track their status.
Signed-off-by: Flavio Ceolin <flavio.ceolin@intel.com>
As discovered by Carlo Caione, the k_thread_join code had a case where
it detected it had been called on a thread already marked _THREAD_DEAD
and exited early. That's not sufficient. The thread state is mutated
from the thread itself on its exit path. It may still be running!
Just like the code in z_swap(), we need to spin waiting on the other
CPU to write the switch handle before knowing it's safe to return,
otherwise the calling context might (and did) do something like
immediately k_thread_create() a new thread in the "dead" thread's
struct while it was still running on the other core.
There was also a similar case in k_thread_abort() which had the same
issue: it needs to spin waiting on the other CPU to kill the thread
via the same mechanism.
Fixes #58116
Originally-by: Carlo Caione <ccaione@baylibre.com>
Signed-off-by: Andy Ross <andyross@google.com>
The switch_handle field in the thread struct is used as an atomic flag
between CPUs in SMP, and has been known for a long time to technically
require memory barriers for correct operation. We have an API for
that now, so put them in:
* The code immediately before arch_switch() needs a write barrier to
ensure that thread state written by the scheduler is seen to happen
before the outgoing thread is flagged with a valid switch handle.
* The loop in z_sched_switch_spin() needs a read barrier at the end,
to make sure the calling context doesn't load state from before the
other CPU stored the switch handle.
Also, that same spot in switch_spin was spinning with interrupts held,
which means it needs a call to arch_spin_relax() to avoid an FPU state
deadlock on some architectures.
Signed-off-by: Andy Ross <andyross@google.com>
This trick turns out also to be needed by the abort/join code.
Promote it to a more formal-looking internal API and clean up the
documentation to (hopefully) clarify the exact behavior and better
explain the need.
This is one of the more... enchanted bits of the scheduler, and while
the trick is IMHO pretty clean, it remains a big SMP footgun.
Signed-off-by: Andy Ross <andyross@google.com>
Many RTOS applications assume a 1:1 mapping between virtual and
physical addresses, so add 1:1 mapping support in z_phys_map()
to easily adapt these applications.
Signed-off-by: Hou Zhiqiang <Zhiqiang.Hou@nxp.com>
Give architectures that need it the ability to perform special checks
while e.g. waiting for a spinlock to become available.
Signed-off-by: Nicolas Pitre <npitre@baylibre.com>
Introduce a new API for barrier operations starting with a general
skeleton and the implementation for barrier_data_memory_fence_full().
Select a built-in or an arch-based implementation according to new
Kconfig symbols CONFIG_BARRIER_OPERATIONS_BUILTIN and
CONFIG_BARRIER_OPERATIONS_ARCH.
The built-in implementation falls back on the compiler built-in
function using __ATOMIC_SEQ_CST as it is done for the atomic APIs
already.
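A classic producer-side use of such a full fence (the header path, flag, and data names are illustrative):
```c
#include <zephyr/sys/barrier.h>

static volatile int shared_data;
static volatile int data_ready;

void publish(int value)
{
	shared_data = value;

	/* Order the data store before the flag store for all observers */
	barrier_data_memory_fence_full();

	data_ready = 1;
}
```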
Signed-off-by: Carlo Caione <ccaione@baylibre.com>
z_page_frame can't be packed on Xtensa due to memory alignment
constraints. When this struct is packed, it is 5 bytes long, which
causes a memory alignment problem on Xtensa.
Signed-off-by: Flavio Ceolin <flavio.ceolin@intel.com>
Until now, iterable sections APIs have been part of the toolchain
(common) headers. They are not strictly related to a toolchain; they
just rely on the linker providing support for sections. Most files
relied on indirect includes to access the API; now it is included as
needed.
Signed-off-by: Gerard Marull-Paretas <gerard.marull@nordicsemi.no>
When a running thread gets aborted asynchronously (this only happens
in SMP contexts, obviously) it gets flagged "aborting", but the actual
abort needs to happen in the thread's own context. For convenience,
this was done in the next_up() routine that selects the next thread to
run at interrupt exit time.
But this check was being done AFTER the next candidate thread was
selected from the run queue. Thread abort can wake up threads blocked
in k_thread_join(), and therefore these weren't seen as runnable
threads, even if they should have been.
Executive summary: if you killed a thread running on another CPU, and
there was another thread joined to the killed thread that should have
run on that CPU, it wouldn't (until it received an interrupt or
otherwise reached a schedule point).
Move the abort check above the run queue inspection and into the
end-of-interrupt processing in z_get_next_switch_handle() (so it's
actually a mild performance boost as it's no longer part of the
cooperative context switch path). Simple fix, subtle bug.
Fixes #58040
Signed-off-by: Andy Ross <andyross@google.com>
The exception handler (arch/x86/core/ia32/excstub.S) may access the
_kernel variable, which will lead to failure when paging is enabled,
so make this critical variable pinned.
Signed-off-by: Qipeng Zha <qipeng.zha@intel.com>
The ACE 2.0 LNL platform has 5 HIFI4 cores. Change the number
of cores to enable the 5th core on the platform.
Signed-off-by: Jaroslaw Stelter <Jaroslaw.Stelter@intel.com>
Without these parentheses, specifying a q_max_msgs of e.g.
`MY_DEFAULT_QUEUESIZE+1` would result in a buffer of size
(1 element + MY_DEFAULT_QUEUESIZE bytes).
This would then lead to an unbounded buffer overflow because the queue
never reaches the exact (offset by MY_DEFAULT_QUEUESIZE bytes)
`buffer_end` and just keeps writing.
Additionally, add asserts to make sure this can't happen again.
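A standalone demonstration of why the parentheses matter (macro names are made up; only the expansion behavior is the point):
```c
#include <stdio.h>

#define ELEM_SIZE 8
#define MY_DEFAULT_QUEUESIZE 4

#define BUF_BYTES_BUGGY(n) (ELEM_SIZE * n)   /* unparenthesized n */
#define BUF_BYTES_FIXED(n) (ELEM_SIZE * (n))

int main(void)
{
	/* Expands to 8 * 4 + 1 = 33, not the intended 8 * (4 + 1) = 40 */
	printf("buggy: %d\n", BUF_BYTES_BUGGY(MY_DEFAULT_QUEUESIZE + 1));
	printf("fixed: %d\n", BUF_BYTES_FIXED(MY_DEFAULT_QUEUESIZE + 1));
	return 0;
}
```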
Signed-off-by: Armin Brauns <armin.brauns@embedded-solutions.at>
Use iterable sections to handle devices list. This simplifies devices
implementation by using standard APIs.
Signed-off-by: Gerard Marull-Paretas <gerard.marull@nordicsemi.no>
When building sample.minimal.mt-no-preempt-no-timers.arm on arm-clang
we get a link error as z_pm_save_idle_exit expects sys_clock_idle_exit
to be defined.
However the sample sets CONFIG_SYS_CLOCK_EXISTS=n so
sys_clock_idle_exit() will not be defined by any driver. So add proper
ifdef protection in z_pm_save_idle_exit to fix this.
Signed-off-by: Kumar Gala <kumar.gala@intel.com>
When a semaphore is given and there is no thread waiting
for it, do not unconditionally perform a reschedule.
Signed-off-by: Peter Mitsis <peter.mitsis@intel.com>
Some devices do not need to perform any initialization, so allow the
init function to be NULL. In this case, the initialization code will
just mark the device as initialized, i.e. ready.
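A hedged sketch of what this permits (the data, config, and API names are illustrative):
```c
/* No init work needed: pass NULL and the device is marked ready */
DEVICE_DEFINE(my_dev, "MY_DEV", NULL, NULL,
	      &my_data, &my_config,
	      POST_KERNEL, CONFIG_KERNEL_INIT_PRIORITY_DEVICE,
	      &my_api);
```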
Signed-off-by: Gerard Marull-Paretas <gerard.marull@nordicsemi.no>
Removes unused absolute symbols that are defined via the
GEN_ABSOLUTE_SYM() macro in the kernel directory.
Signed-off-by: Peter Mitsis <peter.mitsis@intel.com>
As both C and C++ standards require applications running under an OS to
return 'int', adapt that for Zephyr to align with those standards. This also
eliminates errors when building with clang when not using -ffreestanding,
and reduces the need for compiler flags to silence warnings for both clang
and gcc.
Most of these changes were automated using coccinelle with the following
script:
@@
@@
- void
+ int
main(...) {
...
- return;
+ return 0;
...
}
Approximately 40 files had to be edited by hand as coccinelle was unable to
fix them.
Signed-off-by: Keith Packard <keithp@keithp.com>
As both C and C++ standards require applications running under an OS to
return 'int', adapt that for Zephyr to align with those standards. This also
eliminates errors when building with clang when not using -ffreestanding,
and reduces the need for compiler flags to silence warnings for both clang
and gcc.
Signed-off-by: Keith Packard <keithp@keithp.com>
Many areas of Zephyr divide and round up without using the DIV_ROUND_UP
macro. Make use of it, so that we use a tested system macro and
at the same time make the code more readable.
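The typical before/after shape of these changes:
```c
#include <stddef.h>
#include <zephyr/sys/util.h>

#define BLOCK_SIZE 64 /* illustrative value */

size_t blocks_needed(size_t len)
{
	/* was: return (len + BLOCK_SIZE - 1) / BLOCK_SIZE; */
	return DIV_ROUND_UP(len, BLOCK_SIZE);
}
```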
Signed-off-by: Gerard Marull-Paretas <gerard.marull@nordicsemi.no>
The init infrastructure, found in `init.h`, is currently used by:
- `SYS_INIT`: to call functions before `main`
- `DEVICE_*`: to initialize devices
They are all sorted according to an initialization level + a priority.
`SYS_INIT` calls are really orthogonal to devices; however, the required
function signature takes a `const struct device *dev` as its first
argument. The only reason for that is that the same init machinery is
used by devices, so we have something like:
```c
struct init_entry {
	int (*init)(const struct device *dev);
	/* only set by DEVICE_*, otherwise NULL */
	const struct device *dev;
};
```
As a result, we end up with a weird/ugly pattern like this:
```c
static int my_init(const struct device *dev)
{
	/* always NULL! add ARG_UNUSED to avoid compiler warning */
	ARG_UNUSED(dev);
	...
}
```
This is really a result of poor internals isolation. This patch proposes
to make init entries more flexible so that they can accept system
initialization calls like this:
```c
static int my_init(void)
{
...
}
```
This is achieved using a union:
```c
union init_function {
	/* for SYS_INIT, used when init_entry.dev == NULL */
	int (*sys)(void);
	/* for DEVICE*, used when init_entry.dev != NULL */
	int (*dev)(const struct device *dev);
};
struct init_entry {
	/* stores init function (either for SYS_INIT or DEVICE*) */
	union init_function init_fn;
	/* stores device pointer for DEVICE*, NULL for SYS_INIT. Allows
	 * to know which union entry to call.
	 */
	const struct device *dev;
};
```
This solution **does not increase ROM usage**, and allows offering clean
public APIs for both SYS_INIT and DEVICE*. Note, however, that the init
machinery keeps a coupling with devices.
**NOTE**: This is a breaking change! All `SYS_INIT` functions will need
to be converted to the new signature. See the script offered in the
following commit.
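After the conversion, a typical `SYS_INIT` registration looks like this:
```c
static int my_subsys_init(void)
{
	/* no unused device argument anymore */
	return 0;
}

SYS_INIT(my_subsys_init, POST_KERNEL, CONFIG_KERNEL_INIT_PRIORITY_DEFAULT);
```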
Signed-off-by: Gerard Marull-Paretas <gerard.marull@nordicsemi.no>
init: convert SYS_INIT functions to the new signature
Conversion scripted using scripts/utils/migrate_sys_init.py.
Signed-off-by: Gerard Marull-Paretas <gerard.marull@nordicsemi.no>
manifest: update projects for SYS_INIT changes
Update modules with updated SYS_INIT calls:
- hal_ti
- lvgl
- sof
- TraceRecorderSource
Signed-off-by: Gerard Marull-Paretas <gerard.marull@nordicsemi.no>
tests: devicetree: devices: adjust test
Adjust test according to the recently introduced SYS_INIT
infrastructure.
Signed-off-by: Gerard Marull-Paretas <gerard.marull@nordicsemi.no>
tests: kernel: threads: adjust SYS_INIT call
Adjust to the new signature: int (*init_fn)(void);
Signed-off-by: Gerard Marull-Paretas <gerard.marull@nordicsemi.no>
Add check to ensure that CONFIG_MP_NUM_CPUS and CONFIG_MP_MAX_NUM_CPUS
are set the same. This will at least cause a build issue for
out-of-tree users.
Signed-off-by: Kumar Gala <kumar.gala@intel.com>
All we really want here is to set default parameters. However
k_sched_time_slice_set() also calls z_reset_time_slice(_current)
which expects `_current` to be fully initialized.
Simply initialize `slice_ticks` and `slice_max_prio` with default values
directly. Unfortunately the compiler isn't smart enough to expand
k_ms_to_ticks_ceil32(CONFIG_TIMESLICE_SIZE) to a constant expression
at build time so we must do the conversion by hand (and it shouldn't
overflow due to the nature of the value).
Signed-off-by: Nicolas Pitre <npitre@baylibre.com>
Slice expirations are now based on the same timeout mechanism as
regular timers, which have recently been fixed and proven to work with
single-tick periods.
Signed-off-by: Nicolas Pitre <npitre@baylibre.com>
The reason for arch_num_cpus() is to be able to dynamically adapt to
the actual number of available CPUs at run time.
In the z_sched_init() case, it is not the number of active CPUs that
we need but rather the total number of potential CPUs, and that is
represented by CONFIG_MP_MAX_NUM_CPUS not arch_num_cpus().
Signed-off-by: Nicolas Pitre <npitre@baylibre.com>
Add the `zephyr,pm-device-runtime-auto` flag to `pm.yaml` and
`struct pm_device`.
This flag is intended to signify to the boot system that device runtime
PM should be automatically enabled on the device after the init function
has run.
Only run `pm_device_runtime_auto_enable` function on a device if
initialisation succeeded. This prevents actions being run on devices
that are not ready.
Signed-off-by: Jordan Yates <jordan.yates@data61.csiro.au>
Make sliceable() the actual condition for a sliceable thread. Avoid
creating a slice timeout for non-sliceable threads. Always reset
slice_expired even if the next thread is not sliceable. Fold
slice_expired_locked() into z_time_slice() to avoid the hidden
unlock/lock. Change `curr` to `thread` as this is not necessarily
the current thread (yet) being set. Make variables static.
Signed-off-by: Nicolas Pitre <npitre@baylibre.com>
Updates events to prevent a timeout from corrupting the list of
threads that need to be woken up.
Signed-off-by: Aastha Grover <aastha.grover@intel.com>
Fixes race condition for k_event_post_internal() in an
SMP environment while walking the waitq. Uses z_sched_waitq_walk()
to safely walk the waitq by using a sched_spinlock.
It should be noted that since walking the wait queue is an
operation of indeterminate length, there exists the possibility
that the sched_spinlock (which is a highly used and contended-for
lock) may be locked for an indeterminate amount of time. However,
it is expected that few threads will be waiting on any given kernel
event object, which should ameliorate this risk.
Fixes #54317
Signed-off-by: Aastha Grover <aastha.grover@intel.com>
Moving timeslice events to timeouts isn't quite enough on SMP, as it's
still possible for systems that don't broadcast their timer interrupts
to end up handling an expiration for a foreign CPU. There, we need an
IPI, and a symmetric call to z_time_slice() (which is idempotent and
fast) in the IPI ISR.
Signed-off-by: Andy Ross <andyross@google.com>