Updates z_get_next_switch_handle() to set the new thread's base.cpu
value as is done in do_swap(). This helps ensure that the thread's
record of the last CPU on which it executed stays current.
Signed-off-by: Peter Mitsis <peter.mitsis@intel.com>
This reverts commit 61c70626a5.
This PR introduced 2 regressions in main CI:
71977 & 71978
Let's revert it for now to get main's CI passing again.
Signed-off-by: Alberto Escolar Piedras <alberto.escolar.piedras@nordicsemi.no>
This reverts commit 20611f13ca.
This PR introduced 2 regressions in main CI:
71977 & 71978
Let's revert it for now to get main's CI passing again.
Signed-off-by: Alberto Escolar Piedras <alberto.escolar.piedras@nordicsemi.no>
This reverts commit 02b24911f7.
This PR introduced 2 regressions in main CI:
71977 & 71978
Let's revert it for now to get main's CI passing again.
Signed-off-by: Alberto Escolar Piedras <alberto.escolar.piedras@nordicsemi.no>
We've had threads spinning on the thread state bits, but we weren't
being careful to ensure that those bits were the last thing seen to
change in a halting thread. Move the state update to the end, and add
a barrier for correctness.
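A sketch of the resulting pattern; the specific fence API shown
(barrier_dmem_fence_full()) is an assumption for illustration:

    /* Make all prior cleanup writes visible before other CPUs can
     * observe the state bits they spin on.
     */
    barrier_dmem_fence_full();
    thread->base.thread_state |= new_state;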
Signed-off-by: Andy Ross <andyross@google.com>
Nicolas Pitre points out that since these thread structs are just
dummies for the context switching, they can be presumed to be "write
only" and thus there's no point in having one per CPU; everyone can
share the same one.
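In sketch form (the name is illustrative), the per-CPU array collapses
to a single shared struct:

    /* Shared by all CPUs: arch code only ever writes to it */
    static struct k_thread dummy_thread;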
The only gotcha is that we never really documented (nor really have a
place to document) that rule, so it's not theoretically impossible for
an architecture to read back what it might have written underneath
arch_switch(). Leave this in a separate commit for bisection
purposes, but the risk seems very low.
Signed-off-by: Andy Ross <andyross@google.com>
After a k_thread_abort(), the resulting thread struct is documented as
unused/free memory that may be re-used (for example, to respawn a new
thread).
But in the special case of aborting the current thread from within an
ISR, that wasn't quite happening. The scheduler cleanup would
complete, but the architecture layer would still try to context switch
away from the aborted thread on exit, and that can include writes to
the now-reused thread struct! The specifics will depend on
architecture (some do a full context save on entry, most don't), but
in the case of USE_SWITCH=y it will at the very least write the
switch_handle field.
Fix this simply, with a per-cpu "switch dummy" thread struct for use
as a target for context switches like this. There is some non-trivial
memory cost to that; thread structs on many architectures are large.
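A sketch of the idea (the array name and CONFIG_MP_MAX_NUM_CPUS sizing
are assumptions for illustration):

    /* One throwaway thread struct per CPU: a safe target to switch
     * away from when the real outgoing thread struct has been freed;
     * arch code may freely write to it (e.g. its switch_handle).
     */
    static struct k_thread switch_dummies[CONFIG_MP_MAX_NUM_CPUS];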
Pleasingly, this also addresses a known deadlock on SMP: because the
"spin in ISR" step now happens as the very last stage of
k_thread_abort() handling, the existing scheduler lock works to
serialize calls such that it's impossible for a cycle of threads to
independently decide to spin on each other: at least one will see
itself as "already aborting" and break the cycle.
Fixes #64646
Signed-off-by: Andy Ross <andyross@google.com>
The big change is to factor out a thread_halt_spin() utility to manage
the core complexity of this code: the situation where an ISR is asked
to abort a thread already running on another SMP CPU.
With that gone, things can be cleaned up quite a bit: remove early
returns, drop most of the "#if CONFIG_SMP" usage (it was superfluous
and will optimize out), unify and clean up the comments, etc...
No behavioral changes (hopefully), just refactoring.
Signed-off-by: Andy Ross <andyross@google.com>
After the move to C files we saw some drop in performance when running
latency_measure. This patch declares the priority queue functions as
static inlines with minor optimizations.
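The pattern, sketched (simplified from the actual queue helpers):

    /* In a header, so the scheduler's hot paths can inline it */
    static inline struct k_thread *z_priq_dumb_best(sys_dlist_t *pq)
    {
        sys_dnode_t *n = sys_dlist_peek_head(pq);

        return (n == NULL) ? NULL
               : CONTAINER_OF(n, struct k_thread, base.qnode_dlist);
    }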
The result for one metric (on qemu):
3.6, before anything was changed:
Get data from LIFO (w/ ctx switch): 13087 ns
after original change (46484da502):
Get data from LIFO (w/ ctx switch): 13663 ns
with this change:
Get data from LIFO (w/ ctx switch): 12543 ns
So overall, a net gain of ~500 ns that can be seen across the board on
many of the metrics.
Signed-off-by: Anas Nashif <anas.nashif@intel.com>
This adds the mechanism to do cleanup after k_thread_abort() is called
on the current thread. This is mainly used for cleaning things up when
the thread cannot be running, e.g., cleaning up the thread stack.
Signed-off-by: Daniel Leung <daniel.leung@intel.com>
Do not build threading support when CONFIG_MULTITHREADING=n is set,
and move the needed calls to a new file with the required changes
instead of the ifdef party in sched.c.
Signed-off-by: Anas Nashif <anas.nashif@intel.com>
Add a closing comment to each endif with the configuration information
to which the endif belongs. This makes the code clearer when the
configs need adaptation.
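For example (config name illustrative):

    #ifdef CONFIG_SCHED_CPU_MASK
    ...
    #endif /* CONFIG_SCHED_CPU_MASK */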
Signed-off-by: Simon Hein <Shein@baumer.com>
k_thread_deadline_set() would modify the thread's deadline and then,
if it was in the run queue, requeue it to put it at the right spot.
Sounds right, right?
It's wrong. The deadline field is part of the thread priority, so
this results in a mis-ordered list. For dlist backends, that's benign
as the removal works anyway, but if CONFIG_SCHED_SCALABLE=y we've now
broken the sorting order of an in-tree item and corrupted the rbtree!
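A sketch of the corrected ordering (simplified; lock and helper names
follow kernel/sched.c conventions):

    K_SPINLOCK(&_sched_spinlock) {
        bool queued = z_is_thread_queued(thread);

        if (queued) {
            dequeue_thread(thread); /* remove before mutating the key */
        }
        thread->base.prio_deadline = deadline;
        if (queued) {
            queue_thread(thread);   /* reinsert with the new key */
        }
    }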
Fixes #69935
Signed-off-by: Andy Ross <andyross@google.com>
Rename the private function to make it clear what priority we are
setting and to be consistent across the code.
Signed-off-by: Anas Nashif <anas.nashif@intel.com>
Avoid single-character variables that render code unreadable and might
cause naming conflicts, as with 't' being used for both timeout and
thread in some places.
Signed-off-by: Anas Nashif <anas.nashif@intel.com>
Use thread wherever it makes sense; using 't' in some places can get
confused with the 't' used for timeouts, for example.
Signed-off-by: Anas Nashif <anas.nashif@intel.com>
Move thread monitor related functions, which are not enabled in most
cases, out of thread.c and clean up headers.
Signed-off-by: Anas Nashif <anas.nashif@intel.com>
This function is only being used by a test, so instead of
reimplementing a syscall in the test, provide a Kconfig option that
enables this functionality only for tests, and remove some of the
duplication and extra code.
Signed-off-by: Anas Nashif <anas.nashif@intel.com>
We shouldn't be calling hooks from optional and upper-layer subsystems
in the kernel; instead, just call the hook to set thread status in the
API where it is needed.
The related bit in the cmsis thread status bitarray is now cleared
directly in the cmsis rtos v1 layer when terminating a thread, not in
the kernel code.
Signed-off-by: Anas Nashif <anas.nashif@intel.com>
In an effort to clean up sched.c, move sections of code that are
compiled in based on options into their own files. CPU mask here is
managed by a Kconfig option and is not widely used (SMP affinity on
multicore systems).
Signed-off-by: Anas Nashif <anas.nashif@intel.com>
The functions that manipulate the essential flag do operate on
threads, but they are misplaced in the thread implementation file. Put
them alongside the other routines that set thread flags, and clean up
headers a bit.
Signed-off-by: Anas Nashif <anas.nashif@intel.com>
Clean up headers under include/ and move handling of the priority
queue to its own file/header.
The header include/zephyr/kernel/internal/sched_priq.h is no longer
needed. Move the relevant structures to where they are used, in
kernel_structs.h.
Signed-off-by: Anas Nashif <anas.nashif@intel.com>
This commit does two things to k_wakeup():
1. It locks the scheduler before marking the thread as not suspended.
   As the clearing of the _THREAD_SUSPENDED bit is not atomic, this
   helps ensure that neither another thread nor an ISR interrupts this
   action (which could result in a corrupted thread_state). See the
   sketch after this list.
2. The call to flag_ipi() has been removed as it is already being made
   within ready_thread().
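A simplified sketch of the resulting flow (helper names follow
kernel/sched.c conventions; timeout handling elided):

    K_SPINLOCK(&_sched_spinlock) {
        if (z_is_thread_suspended(thread)) {
            z_mark_thread_as_not_suspended(thread);
            ready_thread(thread); /* already flags the IPI */
        }
    }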
Signed-off-by: Peter Mitsis <peter.mitsis@intel.com>
Currently z_tick_sleep() returns directly when building the kernel for
the single-thread model. Reorganize the code to use k_busy_wait() so
that it stays time-coherent, since subsystems may depend on it.
When a K_FOREVER timeout is selected, the single-thread implementation
will invoke k_cpu_idle() and the system will wait for an interrupt,
saving power.
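In sketch form (simplified; the tick-to-microsecond conversion helper
is assumed from sys/time_units.h):

    if (K_TIMEOUT_EQ(timeout, K_FOREVER)) {
        k_cpu_idle(); /* wait for an interrupt, saving power */
    } else {
        k_busy_wait(k_ticks_to_us_floor32(ticks));
    }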
Signed-off-by: Gerson Fernando Budke <gerson.budke@ossystems.com.br>
Updates both the k_sleep() and k_usleep() return values so that if
the thread was woken up prematurely, they will return the time left
to sleep rounded up to the nearest millisecond (for k_sleep) or
microsecond (for k_usleep) instead of rounding down. This removes the
ambiguity when a non-zero number of remaining ticks corresponds to a
time of less than 1 millisecond or 1 microsecond.
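Illustrative conversion for the k_sleep() case (assuming the ceiling
helpers from sys/time_units.h; rem_ticks is an illustrative name):

    /* A 1-tick remainder now reports 1 ms instead of 0, so a cut-short
     * sleep is never mistaken for one that ran to completion.
     */
    return (int32_t)k_ticks_to_ms_ceil64(rem_ticks);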
Signed-off-by: Peter Mitsis <peter.mitsis@intel.com>
Extends the concept of halting a thread from just aborting a thread
to both aborting and suspending a thread.
Part of this involves updating k_thread_suspend() to operate in a
similar fashion to that of k_thread_abort().
Signed-off-by: Peter Mitsis <peter.mitsis@intel.com>
Extracts the essential thread synchronization logic when aborting
a thread from z_thread_abort() and moves it to its own routine
called z_thread_halt().
Signed-off-by: Peter Mitsis <peter.mitsis@intel.com>
The routine halt_thread() acts nearly identically to end_thread(),
except that instead of only halting the thread if the _THREAD_DEAD
state bit is not set, it will halt it if the bit specified by the
new_state parameter is not set (which is always _THREAD_DEAD).
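In sketch form (simplified; today new_state is always _THREAD_DEAD):

    static void halt_thread(struct k_thread *thread, uint8_t new_state)
    {
        if ((thread->base.thread_state & new_state) == 0U) {
            thread->base.thread_state |= new_state;
            /* ... dequeue, cancel timeouts, run cleanup ... */
        }
    }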
Signed-off-by: Peter Mitsis <peter.mitsis@intel.com>
Move the syscall_handler.h header, which is used internally only, to a
dedicated internal folder that should not be used outside of Zephyr.
Signed-off-by: Anas Nashif <anas.nashif@intel.com>
Integrates the object core statistics framework into the following
kernel objects:
sys_mem_blocks, k_mem_slab
threads, _cpu, z_kernel
Signed-off-by: Peter Mitsis <peter.mitsis@intel.com>