/*
 * Copyright (c) 2018 Intel Corporation
 *
 * SPDX-License-Identifier: Apache-2.0
 */

#include <zephyr/kernel.h>
#include <ksched.h>
#include <zephyr/spinlock.h>
#include <wait_q.h>
#include <kthread.h>
#include <priority_q.h>
#include <kswap.h>
#include <ipi.h>
#include <kernel_arch_func.h>
#include <zephyr/internal/syscall_handler.h>
#include <zephyr/drivers/timer/system_timer.h>
#include <stdbool.h>
#include <kernel_internal.h>
#include <zephyr/logging/log.h>
#include <zephyr/sys/atomic.h>
#include <zephyr/sys/math_extras.h>
#include <zephyr/timing/timing.h>
#include <zephyr/sys/util.h>

LOG_MODULE_DECLARE(os, CONFIG_KERNEL_LOG_LEVEL);

#if defined(CONFIG_SWAP_NONATOMIC) && defined(CONFIG_TIMESLICING)
extern struct k_thread *pending_current;
#endif

struct k_spinlock _sched_spinlock;

static void update_cache(int preempt_ok);
static void halt_thread(struct k_thread *thread, uint8_t new_state);
static void add_to_waitq_locked(struct k_thread *thread, _wait_q_t *wait_q);

BUILD_ASSERT(CONFIG_NUM_COOP_PRIORITIES >= CONFIG_NUM_METAIRQ_PRIORITIES,
	     "You need to provide at least as many CONFIG_NUM_COOP_PRIORITIES as "
	     "CONFIG_NUM_METAIRQ_PRIORITIES as Meta IRQs are just a special class of cooperative "
	     "threads.");

/*
 * Return value same as e.g. memcmp
 *  > 0 -> thread 1 priority  > thread 2 priority
 *  = 0 -> thread 1 priority == thread 2 priority
 *  < 0 -> thread 1 priority  < thread 2 priority
 * Do not rely on the actual value returned aside from the above.
 * (Again, like memcmp.)
 */
int32_t z_sched_prio_cmp(struct k_thread *thread_1,
			 struct k_thread *thread_2)
{
	/* `prio` is <32b, so the below cannot overflow. */
	int32_t b1 = thread_1->base.prio;
	int32_t b2 = thread_2->base.prio;

	if (b1 != b2) {
		return b2 - b1;
	}

#ifdef CONFIG_SCHED_DEADLINE
	/* If we assume all deadlines live within the same "half" of
	 * the 32 bit modulus space (this is a documented API rule),
	 * then the latest deadline in the queue minus the earliest is
	 * guaranteed to be (2's complement) non-negative.  We can
	 * leverage that to compare the values without having to check
	 * the current time.
	 */
	uint32_t d1 = thread_1->base.prio_deadline;
	uint32_t d2 = thread_2->base.prio_deadline;

	if (d1 != d2) {
		/* Sooner deadline means higher effective priority.
		 * Doing the calculation with unsigned types and casting
		 * to signed isn't perfect, but at least reduces this
		 * from UB on overflow to impdef.
		 */
		return (int32_t) (d2 - d1);
	}
#endif /* CONFIG_SCHED_DEADLINE */
	return 0;
}
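
/* Worked example for the CONFIG_SCHED_DEADLINE branch above: with
 * d1 = 0xFFFFFFF0 and d2 = 0x00000010 the two deadlines sit in the
 * same half of the 32-bit space, so d2 - d1 wraps to 0x20 and the
 * cast yields +32: thread_1, whose deadline comes first, compares as
 * the higher effective priority.
 */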

static ALWAYS_INLINE void *thread_runq(struct k_thread *thread)
{
#ifdef CONFIG_SCHED_CPU_MASK_PIN_ONLY
	int cpu, m = thread->base.cpu_mask;

	/* Edge case: it's legal per the API to "make runnable" a
	 * thread with all CPUs masked off (i.e. one that isn't
	 * actually runnable!).  Sort of a wart in the API and maybe
	 * we should address this in docs/assertions instead to avoid
	 * the extra test.
	 */
	cpu = m == 0 ? 0 : u32_count_trailing_zeros(m);

	return &_kernel.cpus[cpu].ready_q.runq;
#else
	ARG_UNUSED(thread);
	return &_kernel.ready_q.runq;
#endif /* CONFIG_SCHED_CPU_MASK_PIN_ONLY */
}
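
/* The helpers below are thin wrappers around the priority queue
 * backend: curr_cpu_runq() names the queue this CPU dequeues from,
 * runq_add()/runq_remove() operate on the queue a given thread
 * belongs to, and runq_best() returns the best candidate from this
 * CPU's queue.  Idle threads are never kept in a run queue, hence
 * the assertions.
 */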

static ALWAYS_INLINE void *curr_cpu_runq(void)
{
#ifdef CONFIG_SCHED_CPU_MASK_PIN_ONLY
	return &arch_curr_cpu()->ready_q.runq;
#else
	return &_kernel.ready_q.runq;
#endif /* CONFIG_SCHED_CPU_MASK_PIN_ONLY */
}

static ALWAYS_INLINE void runq_add(struct k_thread *thread)
{
	__ASSERT_NO_MSG(!z_is_idle_thread_object(thread));

	_priq_run_add(thread_runq(thread), thread);
}

static ALWAYS_INLINE void runq_remove(struct k_thread *thread)
{
	__ASSERT_NO_MSG(!z_is_idle_thread_object(thread));

	_priq_run_remove(thread_runq(thread), thread);
}

static ALWAYS_INLINE struct k_thread *runq_best(void)
{
	return _priq_run_best(curr_cpu_runq());
}

/* _current is never in the run queue until context switch on
 * SMP configurations, see z_requeue_current()
 */
static inline bool should_queue_thread(struct k_thread *thread)
{
	return !IS_ENABLED(CONFIG_SMP) || thread != _current;
}
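
/* queue_thread()/dequeue_thread() add or remove a thread from the run
 * queue while keeping the _THREAD_QUEUED state bit in sync.  On SMP,
 * queueing _current does not touch the run queue (see
 * should_queue_thread() above); it just records a yield request in
 * the per-CPU swap_ok flag.
 */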

static ALWAYS_INLINE void queue_thread(struct k_thread *thread)
{
	thread->base.thread_state |= _THREAD_QUEUED;
	if (should_queue_thread(thread)) {
		runq_add(thread);
	}
#ifdef CONFIG_SMP
	if (thread == _current) {
		/* add current to end of queue means "yield" */
		_current_cpu->swap_ok = true;
	}
#endif /* CONFIG_SMP */
}

static ALWAYS_INLINE void dequeue_thread(struct k_thread *thread)
{
	thread->base.thread_state &= ~_THREAD_QUEUED;
	if (should_queue_thread(thread)) {
		runq_remove(thread);
	}
}

/* Called out of z_swap() when CONFIG_SMP.  The current thread can
 * never live in the run queue until we are inexorably on the context
 * switch path on SMP, otherwise there is a deadlock condition where a
 * set of CPUs pick a cycle of threads to run and wait for them all to
 * context switch forever.
 */
void z_requeue_current(struct k_thread *thread)
{
	if (z_is_thread_queued(thread)) {
		runq_add(thread);
	}
	signal_pending_ipi();
}
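
/* Concrete shape of the deadlock avoided above: CPU0 sits idle while
 * CPU1 runs thread A.  CPU1 makes a higher-priority thread B runnable
 * and switches to it just as CPU0 takes its IPI and selects A.  If A
 * were still in the run queue at that point, both CPUs would spin in
 * wait_for_switch() waiting for the other side's context switch to
 * finish.
 */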

/* Return true if the thread is aborting, else false */
static inline bool is_aborting(struct k_thread *thread)
{
	return (thread->base.thread_state & _THREAD_ABORTING) != 0U;
}

/* Return true if the thread is aborting or suspending, else false */
static inline bool is_halting(struct k_thread *thread)
{
	return (thread->base.thread_state &
		(_THREAD_ABORTING | _THREAD_SUSPENDING)) != 0U;
}

/* Clear the halting bits (_THREAD_ABORTING and _THREAD_SUSPENDING) */
static inline void clear_halting(struct k_thread *thread)
{
	thread->base.thread_state &= ~(_THREAD_ABORTING | _THREAD_SUSPENDING);
}
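
/* Pick the thread that should run next on this CPU.  On uniprocessor
 * builds this is simply the best queued thread (or the idle thread if
 * the queue is empty); on SMP it also weighs _current, which does not
 * live in the run queue, against that candidate.
 */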

static ALWAYS_INLINE struct k_thread *next_up(void)
{
#ifdef CONFIG_SMP
	if (is_halting(_current)) {
		halt_thread(_current, is_aborting(_current) ?
				      _THREAD_DEAD : _THREAD_SUSPENDED);
	}
#endif /* CONFIG_SMP */

	struct k_thread *thread = runq_best();

#if (CONFIG_NUM_METAIRQ_PRIORITIES > 0) && \
	(CONFIG_NUM_COOP_PRIORITIES > CONFIG_NUM_METAIRQ_PRIORITIES)
	/* MetaIRQs must always attempt to return back to a
	 * cooperative thread they preempted and not whatever happens
	 * to be highest priority now. The cooperative thread was
	 * promised it wouldn't be preempted (by non-metairq threads)!
	 */
	struct k_thread *mirqp = _current_cpu->metairq_preempted;

	if (mirqp != NULL && (thread == NULL || !thread_is_metairq(thread))) {
		if (!z_is_thread_prevented_from_running(mirqp)) {
			thread = mirqp;
		} else {
			_current_cpu->metairq_preempted = NULL;
		}
	}
#endif
/* CONFIG_NUM_METAIRQ_PRIORITIES > 0 &&
 * CONFIG_NUM_COOP_PRIORITIES > CONFIG_NUM_METAIRQ_PRIORITIES
 */

#ifndef CONFIG_SMP
	/* In uniprocessor mode, we can leave the current thread in
	 * the queue (actually we have to, otherwise the assembly
	 * context switch code for all architectures would be
	 * responsible for putting it back in z_swap and ISR return!),
	 * which makes this choice simple.
	 */
	return (thread != NULL) ? thread : _current_cpu->idle_thread;
#else
	/* Under SMP, the "cache" mechanism for selecting the next
	 * thread doesn't work, so we have more work to do to test
	 * _current against the best choice from the queue.  Here, the
	 * thread selected above represents "the best thread that is
	 * not current".
	 *
	 * Subtle note on "queued": in SMP mode, _current does not
	 * live in the queue, so this isn't exactly the same thing as
	 * "ready", it means "is _current already added back to the
	 * queue such that we don't want to re-add it".
	 */
	bool queued = z_is_thread_queued(_current);
	bool active = !z_is_thread_prevented_from_running(_current);

	if (thread == NULL) {
		thread = _current_cpu->idle_thread;
	}

	if (active) {
		int32_t cmp = z_sched_prio_cmp(_current, thread);

		/* Ties only switch if state says we yielded */
		if ((cmp > 0) || ((cmp == 0) && !_current_cpu->swap_ok)) {
			thread = _current;
		}

		if (!should_preempt(thread, _current_cpu->swap_ok)) {
			thread = _current;
		}
	}

	/* Put _current back into the queue */
	if (thread != _current && active &&
	    !z_is_idle_thread_object(_current) && !queued) {
		queue_thread(_current);
	}

	/* Take the new _current out of the queue */
	if (z_is_thread_queued(thread)) {
		dequeue_thread(thread);
	}

	_current_cpu->swap_ok = false;
	return thread;
#endif /* CONFIG_SMP */
}
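
/* Move a thread behind its priority peers in the run queue (the
 * "yield" motion used, e.g., when a time slice expires) and refresh
 * the scheduler's idea of what to run next.
 */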

void move_thread_to_end_of_prio_q(struct k_thread *thread)
{
	if (z_is_thread_queued(thread)) {
		dequeue_thread(thread);
	}
	queue_thread(thread);
	update_cache(thread == _current);
}

/* Track cooperative threads preempted by metairqs so we can return to
 * them specifically. Called at the moment a new thread has been
 * selected to run.
 */
static void update_metairq_preempt(struct k_thread *thread)
{
#if (CONFIG_NUM_METAIRQ_PRIORITIES > 0) && \
	(CONFIG_NUM_COOP_PRIORITIES > CONFIG_NUM_METAIRQ_PRIORITIES)
	if (thread_is_metairq(thread) && !thread_is_metairq(_current) &&
	    !thread_is_preemptible(_current)) {
		/* Record new preemption */
		_current_cpu->metairq_preempted = _current;
	} else if (!thread_is_metairq(thread) && !z_is_idle_thread_object(thread)) {
		/* Returning from existing preemption */
		_current_cpu->metairq_preempted = NULL;
	}
#else
	ARG_UNUSED(thread);
#endif
/* CONFIG_NUM_METAIRQ_PRIORITIES > 0 &&
 * CONFIG_NUM_COOP_PRIORITIES > CONFIG_NUM_METAIRQ_PRIORITIES
 */
}
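
/* Recompute the uniprocessor "next thread" cache in
 * _kernel.ready_q.cache after a scheduling-relevant change;
 * preempt_ok says whether a cooperative _current may be switched away
 * from.  On SMP there is no cache and only the per-CPU swap_ok flag
 * is updated.
 */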

static void update_cache(int preempt_ok)
{
#ifndef CONFIG_SMP
	struct k_thread *thread = next_up();

	if (should_preempt(thread, preempt_ok)) {
#ifdef CONFIG_TIMESLICING
		if (thread != _current) {
			z_reset_time_slice(thread);
		}
#endif /* CONFIG_TIMESLICING */
		update_metairq_preempt(thread);
		_kernel.ready_q.cache = thread;
	} else {
		_kernel.ready_q.cache = _current;
	}

#else
	/* The way this works is that the CPU record keeps its
	 * "cooperative swapping is OK" flag until the next reschedule
	 * call or context switch.  It doesn't need to be tracked per
	 * thread because if the thread gets preempted for whatever
	 * reason the scheduler will make the same decision anyway.
	 */
	_current_cpu->swap_ok = preempt_ok;
#endif /* CONFIG_SMP */
}
|
unified: initial unified kernel implementation
Summary of what this includes:
initialization:
Copy from nano_init.c, with the following changes:
- the main thread is the continuation of the init thread, but an idle
thread is created as well
- _main() initializes threads in groups and starts the EXE group
- the ready queues are initialized
- the main thread is marked as non-essential once the system init is
done
- a weak main() symbol is provided if the application does not provide a
main() function
scheduler:
Not an exhaustive list, but basically provide primitives for:
- adding/removing a thread to/from a wait queue
- adding/removing a thread to/from the ready queue
- marking thread as ready
- locking/unlocking the scheduler
- instead of locking interrupts
- getting/setting thread priority
- checking what state (coop/preempt) a thread is currenlty running in
- rescheduling threads
- finding what thread is the next to run
- yielding/sleeping/aborting sleep
- finding the current thread
threads:
- Add operationns on threads, such as creating and starting them.
standardized handling of kernel object return codes:
- Kernel objects now cause _Swap() to return the following values:
0 => operation successful
-EAGAIN => operation timed out
-Exxxxx => operation failed for another reason
- The thread's swap_data field can be used to return any additional
information required to complete the operation, such as the actual
result of a successful operation.
timeouts:
- same as nano timeouts, renamed to simply 'timeouts'
- the kernel is still tick-based, but objects take timeout values in
ms for forward compatibility with a tickless kernel.
semaphores:
- Port of the nanokernel semaphores, which have the same basic behaviour
as the microkernel ones. Semaphore groups are not yet implemented.
- These semaphores are enhanced in that they accept an initial count and a
count limit. This allows configuring them as binary semaphores, and also
provisioning them without having to "give" the semaphore multiple times
before using them.
mutexes:
- Straight port of the microkernel mutexes. An init function is added to
allow defining them at runtime.
pipes:
- straight port
timers:
- amalgamation of nano and micro timers, with all functionalities
intact.
events:
- re-implementation, using semaphores and workqueues.
mailboxes:
- straight port
message queues:
- straight port of microkernel FIFOs
memory maps:
- straight port
workqueues:
- Basically, have all APIs follow the k_ naming rule, and use the _timeout
subsystem from the unified kernel directory, and not the _nano_timeout
one.
stacks:
- Port of the nanokernel stacks. They can now have multiple threads
pending on them and threads can wait with a timeout.
LIFOs:
- Straight port of the nanokernel LIFOs.
FIFOs:
- Straight port of the nanokernel FIFOs.
Work by: Dmitriy Korovkin <dmitriy.korovkin@windriver.com>
Peter Mitsis <peter.mitsis@windriver.com>
Allan Stephens <allan.stephens@windriver.com>
Benjamin Walsh <benjamin.walsh@windriver.com>
Change-Id: Id3cadb3694484ab2ca467889cfb029be3cd3a7d6
Signed-off-by: Benjamin Walsh <benjamin.walsh@windriver.com>
2016-09-03 00:55:39 +02:00
|
|
|
|
2021-02-20 00:24:24 +01:00
|
|
|
static bool thread_active_elsewhere(struct k_thread *thread)
{
	/* True if the thread is currently running on another CPU.
	 * There are more scalable designs to answer this question in
	 * constant time, but this is fine for now.
	 */
#ifdef CONFIG_SMP
	int currcpu = _current_cpu->id;

	unsigned int num_cpus = arch_num_cpus();

	for (int i = 0; i < num_cpus; i++) {
		if ((i != currcpu) &&
		    (_kernel.cpus[i].current == thread)) {
			return true;
		}
	}
#endif /* CONFIG_SMP */
	ARG_UNUSED(thread);
	return false;
}

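/* Make @thread runnable: add it to the ready queue (unless it is
 * already queued or not ready), refresh the next-thread cache and
 * flag an IPI so that, on SMP, other CPUs re-evaluate what to run.
 * Callers hold _sched_spinlock (see z_ready_thread() below).
 */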
static void ready_thread(struct k_thread *thread)
{
#ifdef CONFIG_KERNEL_COHERENCE
	__ASSERT_NO_MSG(arch_mem_coherent(thread));
#endif /* CONFIG_KERNEL_COHERENCE */

	/* If the thread is already queued, do not try to add it to the
	 * run queue again
	 */
	if (!z_is_thread_queued(thread) && z_is_thread_ready(thread)) {
		SYS_PORT_TRACING_OBJ_FUNC(k_thread, sched_ready, thread);

		queue_thread(thread);
		update_cache(0);
		flag_ipi();
	}
}

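/* Same as z_ready_thread() below, but without taking _sched_spinlock;
 * the caller is expected to hold it already.
 */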
void z_ready_thread_locked(struct k_thread *thread)
{
	if (!thread_active_elsewhere(thread)) {
		ready_thread(thread);
	}
}

void z_ready_thread(struct k_thread *thread)
{
	K_SPINLOCK(&_sched_spinlock) {
		if (!thread_active_elsewhere(thread)) {
			ready_thread(thread);
		}
	}
}

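/* Illustrative sketch (not part of this file): a wait-object "give"
 * path typically readies one waiter and then reschedules, e.g.:
 *
 *	struct k_thread *waiter = z_unpend_first_thread(&obj->wait_q);
 *
 *	if (waiter != NULL) {
 *		arch_thread_return_value_set(waiter, 0);
 *		z_ready_thread(waiter);
 *	}
 *	z_reschedule(&obj->lock, key);
 *
 * The obj->wait_q and obj->lock names are hypothetical here.
 */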
void z_move_thread_to_end_of_prio_q(struct k_thread *thread)
{
	K_SPINLOCK(&_sched_spinlock) {
		move_thread_to_end_of_prio_q(thread);
	}
}

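/* Start a thread that has not yet run: mark it started, ready it and
 * reschedule. A no-op if the thread has already been started.
 */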
void z_sched_start(struct k_thread *thread)
{
	k_spinlock_key_t key = k_spin_lock(&_sched_spinlock);

	if (z_has_thread_started(thread)) {
		k_spin_unlock(&_sched_spinlock, key);
		return;
	}

	z_mark_thread_as_started(thread);
	ready_thread(thread);
	z_reschedule(&_sched_spinlock, key);
}

/* Spins in ISR context, waiting for a thread known to be running on
 * another CPU to catch the IPI we sent and halt. Note that we check
 * for ourselves being asynchronously halted first to prevent simple
 * deadlocks (but not complex ones involving cycles of 3+ threads!).
 */
static k_spinlock_key_t thread_halt_spin(struct k_thread *thread, k_spinlock_key_t key)
{
	if (is_halting(_current)) {
		halt_thread(_current,
			    is_aborting(_current) ? _THREAD_DEAD : _THREAD_SUSPENDED);
	}
	k_spin_unlock(&_sched_spinlock, key);
	while (is_halting(thread)) {
	}
	key = k_spin_lock(&_sched_spinlock);
	z_sched_switch_spin(thread);
	return key;
}

/* Shared handler for k_thread_{suspend,abort}(). Called with the
 * scheduler lock held and with the caller's spinlock key, which it
 * may release and reacquire, and which is released before any return
 * (aborting _current will not return, obviously). The return, when
 * there is one, may happen after a context switch.
 */
static void z_thread_halt(struct k_thread *thread, k_spinlock_key_t key,
			  bool terminate)
{
	_wait_q_t *wq = &thread->join_queue;
#ifdef CONFIG_SMP
	wq = terminate ? wq : &thread->halt_queue;
#endif

	/* If the target is a thread running on another CPU, flag it
	 * and poke it (note that we might spin to wait, so a true
	 * synchronous IPI is needed here, not a deferred one!); it
	 * will halt itself in the IPI. Otherwise it's unscheduled,
	 * so we can clean it up directly.
	 */
	if (thread_active_elsewhere(thread)) {
		thread->base.thread_state |= (terminate ? _THREAD_ABORTING
					      : _THREAD_SUSPENDING);
#if defined(CONFIG_SMP) && defined(CONFIG_SCHED_IPI_SUPPORTED)
		arch_sched_ipi();
#endif
		if (arch_is_in_isr()) {
			key = thread_halt_spin(thread, key);
			k_spin_unlock(&_sched_spinlock, key);
		} else {
			add_to_waitq_locked(_current, wq);
			z_swap(&_sched_spinlock, key);
		}
	} else {
		halt_thread(thread, terminate ? _THREAD_DEAD : _THREAD_SUSPENDED);
		if ((thread == _current) && !arch_is_in_isr()) {
			z_swap(&_sched_spinlock, key);
			__ASSERT(!terminate, "aborted _current back from dead");
		} else {
			k_spin_unlock(&_sched_spinlock, key);
		}
	}
}

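/* k_thread_suspend() implementation: cancel any pending thread
 * timeout, then halt the target via z_thread_halt() with
 * terminate=false (the abort path uses the same handler with
 * terminate=true). Suspending an already-suspended thread is a no-op.
 */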
void z_impl_k_thread_suspend(struct k_thread *thread)
{
	SYS_PORT_TRACING_OBJ_FUNC_ENTER(k_thread, suspend, thread);

	(void)z_abort_thread_timeout(thread);

	k_spinlock_key_t key = k_spin_lock(&_sched_spinlock);

	if ((thread->base.thread_state & _THREAD_SUSPENDED) != 0U) {

		/* The target thread is already suspended. Nothing to do. */

		k_spin_unlock(&_sched_spinlock, key);
		return;
	}

	z_thread_halt(thread, key, false);

	SYS_PORT_TRACING_OBJ_FUNC_EXIT(k_thread, suspend, thread);
}

#ifdef CONFIG_USERSPACE
static inline void z_vrfy_k_thread_suspend(struct k_thread *thread)
{
	K_OOPS(K_SYSCALL_OBJ(thread, K_OBJ_THREAD));
	z_impl_k_thread_suspend(thread);
}
#include <syscalls/k_thread_suspend_mrsh.c>
#endif /* CONFIG_USERSPACE */

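/* k_thread_resume() implementation: clear the suspended state, ready
 * the thread again and reschedule. Returns immediately if the thread
 * was not suspended.
 */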
void z_impl_k_thread_resume(struct k_thread *thread)
{
	SYS_PORT_TRACING_OBJ_FUNC_ENTER(k_thread, resume, thread);

	k_spinlock_key_t key = k_spin_lock(&_sched_spinlock);

	/* Do not try to resume a thread that was not suspended */
	if (!z_is_thread_suspended(thread)) {
		k_spin_unlock(&_sched_spinlock, key);
		return;
	}

	z_mark_thread_as_not_suspended(thread);
	ready_thread(thread);

	z_reschedule(&_sched_spinlock, key);

	SYS_PORT_TRACING_OBJ_FUNC_EXIT(k_thread, resume, thread);
}

#ifdef CONFIG_USERSPACE
static inline void z_vrfy_k_thread_resume(struct k_thread *thread)
{
	K_OOPS(K_SYSCALL_OBJ(thread, K_OBJ_THREAD));
	z_impl_k_thread_resume(thread);
}
#include <syscalls/k_thread_resume_mrsh.c>
#endif /* CONFIG_USERSPACE */

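/* Return the wait queue a pended thread is blocked on; asserts that
 * the thread is actually pended on something.
 */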
static _wait_q_t *pended_on_thread(struct k_thread *thread)
{
	__ASSERT_NO_MSG(thread->base.pended_on);

	return thread->base.pended_on;
}

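/* Take a thread out of the ready queue (if it is queued) and refresh
 * the next-thread cache.
 */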
static void unready_thread(struct k_thread *thread)
{
	if (z_is_thread_queued(thread)) {
		dequeue_thread(thread);
	}
	update_cache(thread == _current);
}

/* _sched_spinlock must be held */
static void add_to_waitq_locked(struct k_thread *thread, _wait_q_t *wait_q)
{
	unready_thread(thread);
	z_mark_thread_as_pending(thread);

	SYS_PORT_TRACING_FUNC(k_thread, sched_pend, thread);

	if (wait_q != NULL) {
		thread->base.pended_on = wait_q;
		_priq_wait_add(&wait_q->waitq, thread);
	}
}

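/* Arm the thread's wakeup timeout, unless the wait is K_FOREVER. */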
static void add_thread_timeout(struct k_thread *thread, k_timeout_t timeout)
{
	if (!K_TIMEOUT_EQ(timeout, K_FOREVER)) {
		z_add_thread_timeout(thread, timeout);
	}
}

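/* Pend @thread on @wait_q with the given timeout; _sched_spinlock
 * must already be held (see z_pend_thread() and z_pend_curr() below).
 */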
static void pend_locked(struct k_thread *thread, _wait_q_t *wait_q,
			k_timeout_t timeout)
{
#ifdef CONFIG_KERNEL_COHERENCE
	__ASSERT_NO_MSG(wait_q == NULL || arch_mem_coherent(wait_q));
#endif /* CONFIG_KERNEL_COHERENCE */
	add_to_waitq_locked(thread, wait_q);
	add_thread_timeout(thread, timeout);
}

void z_pend_thread(struct k_thread *thread, _wait_q_t *wait_q,
		   k_timeout_t timeout)
{
	__ASSERT_NO_MSG(thread == _current || is_thread_dummy(thread));
	K_SPINLOCK(&_sched_spinlock) {
		pend_locked(thread, wait_q, timeout);
	}
}

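/* Remove @thread from its wait queue and clear its pending state,
 * without touching any timeout it may have armed.
 */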
static inline void unpend_thread_no_timeout(struct k_thread *thread)
{
	_priq_wait_remove(&pended_on_thread(thread)->waitq, thread);
	z_mark_thread_as_not_pending(thread);
	thread->base.pended_on = NULL;
}

ALWAYS_INLINE void z_unpend_thread_no_timeout(struct k_thread *thread)
{
	K_SPINLOCK(&_sched_spinlock) {
		if (thread->base.pended_on != NULL) {
			unpend_thread_no_timeout(thread);
		}
	}
}

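/* Wake a pended (or, on timeout expiry, suspended) thread and make it
 * runnable again, unless it is dead or being aborted. is_timeout is
 * true when the wakeup comes from the thread timeout handler below.
 */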
void z_sched_wake_thread(struct k_thread *thread, bool is_timeout)
{
	K_SPINLOCK(&_sched_spinlock) {
		bool killed = (thread->base.thread_state &
			       (_THREAD_DEAD | _THREAD_ABORTING));

#ifdef CONFIG_EVENTS
		bool do_nothing = thread->no_wake_on_timeout && is_timeout;

		thread->no_wake_on_timeout = false;

		if (do_nothing) {
			continue;
		}
#endif /* CONFIG_EVENTS */

		if (!killed) {
			/* The thread is not being killed */
			if (thread->base.pended_on != NULL) {
				unpend_thread_no_timeout(thread);
			}
			z_mark_thread_as_started(thread);
			if (is_timeout) {
				z_mark_thread_as_not_suspended(thread);
			}
			ready_thread(thread);
		}
	}
}

#ifdef CONFIG_SYS_CLOCK_EXISTS
/* Timeout handler for *_thread_timeout() APIs */
void z_thread_timeout(struct _timeout *timeout)
{
	struct k_thread *thread = CONTAINER_OF(timeout,
					       struct k_thread, base.timeout);

	z_sched_wake_thread(thread, true);
}
#endif /* CONFIG_SYS_CLOCK_EXISTS */

int z_pend_curr(struct k_spinlock *lock, k_spinlock_key_t key,
		_wait_q_t *wait_q, k_timeout_t timeout)
{
#if defined(CONFIG_TIMESLICING) && defined(CONFIG_SWAP_NONATOMIC)
	pending_current = _current;
#endif /* CONFIG_TIMESLICING && CONFIG_SWAP_NONATOMIC */
	__ASSERT_NO_MSG(sizeof(_sched_spinlock) == 0 || lock != &_sched_spinlock);

	/* We do a "lock swap" prior to calling z_swap(), such that
	 * the caller's lock gets released as desired. But we ensure
	 * that we hold the scheduler lock and leave local interrupts
	 * masked until we reach the context switch. z_swap() itself
	 * has similar code; the duplication is because it's a legacy
	 * API that doesn't expect to be called with the scheduler
	 * lock held.
	 */
	(void) k_spin_lock(&_sched_spinlock);
	pend_locked(_current, wait_q, timeout);
	k_spin_release(lock);
	return z_swap(&_sched_spinlock, key);
}

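/* Illustrative sketch (not part of this file): a blocking "take"
 * operation on a kernel object typically pends the current thread via
 * z_pend_curr() and hands the z_swap() result back to its caller
 * (0 when woken by the object, -EAGAIN when the wait times out), e.g.:
 *
 *	k_spinlock_key_t key = k_spin_lock(&obj->lock);
 *
 *	if (obj->count > 0) {
 *		obj->count--;
 *		k_spin_unlock(&obj->lock, key);
 *		return 0;
 *	}
 *	return z_pend_curr(&obj->lock, key, &obj->wait_q, timeout);
 *
 * The obj->lock, obj->count and obj->wait_q names are hypothetical.
 */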
struct k_thread *z_unpend1_no_timeout(_wait_q_t *wait_q)
{
	struct k_thread *thread = NULL;

	K_SPINLOCK(&_sched_spinlock) {
		thread = _priq_wait_best(&wait_q->waitq);

		if (thread != NULL) {
			unpend_thread_no_timeout(thread);
		}
	}

	return thread;
}

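/* Same as z_unpend1_no_timeout() above, except that the woken
 * thread's timeout, if any, is also aborted.
 */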
struct k_thread *z_unpend_first_thread(_wait_q_t *wait_q)
|
unified: initial unified kernel implementation
Summary of what this includes:
initialization:
Copy from nano_init.c, with the following changes:
- the main thread is the continuation of the init thread, but an idle
thread is created as well
- _main() initializes threads in groups and starts the EXE group
- the ready queues are initialized
- the main thread is marked as non-essential once the system init is
done
- a weak main() symbol is provided if the application does not provide a
main() function
scheduler:
Not an exhaustive list, but basically provide primitives for:
- adding/removing a thread to/from a wait queue
- adding/removing a thread to/from the ready queue
- marking thread as ready
- locking/unlocking the scheduler
- instead of locking interrupts
- getting/setting thread priority
- checking what state (coop/preempt) a thread is currenlty running in
- rescheduling threads
- finding what thread is the next to run
- yielding/sleeping/aborting sleep
- finding the current thread
threads:
- Add operationns on threads, such as creating and starting them.
standardized handling of kernel object return codes:
- Kernel objects now cause _Swap() to return the following values:
0 => operation successful
-EAGAIN => operation timed out
-Exxxxx => operation failed for another reason
- The thread's swap_data field can be used to return any additional
information required to complete the operation, such as the actual
result of a successful operation.
timeouts:
- same as nano timeouts, renamed to simply 'timeouts'
- the kernel is still tick-based, but objects take timeout values in
ms for forward compatibility with a tickless kernel.
semaphores:
- Port of the nanokernel semaphores, which have the same basic behaviour
as the microkernel ones. Semaphore groups are not yet implemented.
- These semaphores are enhanced in that they accept an initial count and a
count limit. This allows configuring them as binary semaphores, and also
provisioning them without having to "give" the semaphore multiple times
before using them.
mutexes:
- Straight port of the microkernel mutexes. An init function is added to
allow defining them at runtime.
pipes:
- straight port
timers:
- amalgamation of nano and micro timers, with all functionalities
intact.
events:
- re-implementation, using semaphores and workqueues.
mailboxes:
- straight port
message queues:
- straight port of microkernel FIFOs
memory maps:
- straight port
workqueues:
- Basically, have all APIs follow the k_ naming rule, and use the _timeout
subsystem from the unified kernel directory, and not the _nano_timeout
one.
stacks:
- Port of the nanokernel stacks. They can now have multiple threads
pending on them and threads can wait with a timeout.
LIFOs:
- Straight port of the nanokernel LIFOs.
FIFOs:
- Straight port of the nanokernel FIFOs.
Work by: Dmitriy Korovkin <dmitriy.korovkin@windriver.com>
Peter Mitsis <peter.mitsis@windriver.com>
Allan Stephens <allan.stephens@windriver.com>
Benjamin Walsh <benjamin.walsh@windriver.com>
Change-Id: Id3cadb3694484ab2ca467889cfb029be3cd3a7d6
Signed-off-by: Benjamin Walsh <benjamin.walsh@windriver.com>
2016-09-03 00:55:39 +02:00
|
|
|
{
|
2021-02-10 01:47:47 +01:00
|
|
|
struct k_thread *thread = NULL;
|
	K_SPINLOCK(&_sched_spinlock) {
		thread = _priq_wait_best(&wait_q->waitq);

		if (thread != NULL) {
			unpend_thread_no_timeout(thread);
			(void)z_abort_thread_timeout(thread);
		}
	}

unified: cache the next thread to run
When adding a thread to the ready queue, it is often known at that time
if the thread added will be the next one to run or not. So, instead of
simply updating the ready queues and the bitmask, also cache what that
thread is, so that when the scheduler is invoked, it can simply fetch it
from there. This is only done if there is a thread in the cache, since
the way the cache is updated is by comparing the priorities of the
thread being added and the cached thread.
When a thread is removed from the ready queue, if it is currently the
cached thread, it is also removed from the cache. The cache is not
updated at this time, since this would be a preemptive fetching that
could be overridden before the newly cached thread would even be
scheduled in.
Finally, when a thread is scheduled in, it now becomes the cached thread
since the fact that it is running means that by definition it was the
next one to run.
Doing this can speed up considerably some context switch times,
especially when a thread is preempted by an interrupt and the same
thread is scheduled when the interrupt exits.
Change-Id: I6dc8391cfca566699bb9b217eafe6bc6a063c8bb
Signed-off-by: Benjamin Walsh <benjamin.walsh@windriver.com>
2016-09-30 19:44:58 +02:00

	return thread;
}

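The "cache the next thread to run" annotation above is the design note for the _kernel.ready_q.cache field used later in this file. As a rough, illustrative sketch only (not this file's actual code; z_is_t1_higher_prio_than_t2() is assumed here as the priority-comparison helper, and the real maintenance is funneled through the update_cache() calls visible below), the bookkeeping amounts to:

/* Illustrative sketch of the ready-queue cache described above.
 * The cache holds the thread the scheduler would pick next, so the
 * common scheduling path can skip a run-queue scan.
 */
static void cache_on_ready_q_add(struct k_thread *thread)
{
	struct k_thread *cached = _kernel.ready_q.cache;

	/* Only displace the cached thread if the new one outranks it */
	if ((cached == NULL) || z_is_t1_higher_prio_than_t2(thread, cached)) {
		_kernel.ready_q.cache = thread;
	}
}

static void cache_on_ready_q_remove(struct k_thread *thread)
{
	/* Invalidate only; the next scheduling point recomputes the cache,
	 * avoiding a preemptive fetch that could be overridden anyway.
	 */
	if (_kernel.ready_q.cache == thread) {
		_kernel.ready_q.cache = NULL;
	}
}
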
void z_unpend_thread(struct k_thread *thread)
{
	z_unpend_thread_no_timeout(thread);
	(void)z_abort_thread_timeout(thread);
}

/* Priority set utility that does no rescheduling, it just changes the
 * run queue state, returning true if a reschedule is needed later.
 */
bool z_thread_prio_set(struct k_thread *thread, int prio)
{
	bool need_sched = 0;

	K_SPINLOCK(&_sched_spinlock) {
		need_sched = z_is_thread_ready(thread);

		if (need_sched) {
			/* Don't requeue on SMP if it's the running thread */
			if (!IS_ENABLED(CONFIG_SMP) || z_is_thread_queued(thread)) {
				dequeue_thread(thread);
				thread->base.prio = prio;
				queue_thread(thread);
			} else {
				thread->base.prio = prio;
			}
			update_cache(1);
		} else {
			thread->base.prio = prio;
		}
	}

	SYS_PORT_TRACING_OBJ_FUNC(k_thread, sched_priority_set, thread, prio);

	return need_sched;
}

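Since z_thread_prio_set() only adjusts run-queue state, a hypothetical caller (sketch only, not this file's API surface) would pair it with a reschedule point so the new priority can take effect right away:

static void set_prio_and_maybe_switch(struct k_thread *thread, int prio)
{
	bool need_sched = z_thread_prio_set(thread, prio);

	/* Reschedule from thread context only; ISRs defer to interrupt exit */
	if (need_sched && !arch_is_in_isr()) {
		z_reschedule_unlocked();
	}
}
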
static inline bool resched(uint32_t key)
{
#ifdef CONFIG_SMP
	_current_cpu->swap_ok = 0;
#endif /* CONFIG_SMP */

	return arch_irq_unlocked(key) && !arch_is_in_isr();
}

/*
 * Check if the next ready thread is the same as the current thread
 * and save the trip if true.
 */
static inline bool need_swap(void)
{
	/* the SMP case will be handled in C based z_swap() */
#ifdef CONFIG_SMP
	return true;
#else
	struct k_thread *new_thread;

	/* Check if the next ready thread is the same as the current thread */
	new_thread = _kernel.ready_q.cache;

	return new_thread != _current;
#endif /* CONFIG_SMP */
}

void z_reschedule(struct k_spinlock *lock, k_spinlock_key_t key)
{
	if (resched(key.key) && need_swap()) {
		z_swap(lock, key);
	} else {
		k_spin_unlock(lock, key);
		signal_pending_ipi();
	}
}

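z_reschedule() is the common tail of operations that may have made a higher-priority thread runnable: the caller holds an object lock, updates scheduler state, and then either context-switches or simply drops the lock. A hypothetical IPC-style caller might look like this (my_obj_lock, my_obj_signal and the z_ready_thread() call are assumptions for illustration, not code from this file):

static struct k_spinlock my_obj_lock;

static void my_obj_signal(struct k_thread *waiter)
{
	k_spinlock_key_t key = k_spin_lock(&my_obj_lock);

	z_ready_thread(waiter);          /* assumed scheduler primitive */
	z_reschedule(&my_obj_lock, key); /* swaps, or unlocks and returns */
}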

void z_reschedule_irqlock(uint32_t key)
{
	if (resched(key) && need_swap()) {
		z_swap_irqlock(key);
	} else {
		irq_unlock(key);
		signal_pending_ipi();
	}
kernel: Scheduler refactoring: use _reschedule_*() always
There was a somewhat promiscuous pattern in the kernel where IPC
mechanisms would do something that might affect the current thread
choice, then check _must_switch_threads() (or occasionally
__must_switch_threads -- don't ask, the distinction is being replaced
by real English words), sometimes _is_in_isr() (but not always, even
in contexts where that looks like it would be a mistake), and then
call _Swap() if everything is OK, otherwise releasing the irq_lock().
Sometimes this was done directly, sometimes via the inverted test,
sometimes (poll, heh) by doing the test when the thread state was
modified and then needlessly passing the result up the call stack to
the point of the _Swap().
And some places were just calling _reschedule_threads(), which did all
this already.
Unify all this madness. The old _reschedule_threads() function has
split into two variants: _reschedule_yield() and
_reschedule_noyield(). The latter is the "normal" one that respects
the cooperative priority of the current thread (i.e. it won't switch
out even if there is a higher priority thread ready -- the current
thread has to pend itself first), the former is used in the handful of
places where code was doing a swap unconditionally, just to preserve
precise behavior across the refactor. I'm not at all convinced it
should exist...
Signed-off-by: Andy Ross <andrew.j.ross@intel.com>
2018-03-26 19:54:40 +02:00

}

void k_sched_lock(void)
{
	K_SPINLOCK(&_sched_spinlock) {
		SYS_PORT_TRACING_FUNC(k_thread, sched_lock);

		z_sched_lock();
	}
}

void k_sched_unlock(void)
{
	K_SPINLOCK(&_sched_spinlock) {
		__ASSERT(_current->base.sched_locked != 0U, "");
		__ASSERT(!arch_is_in_isr(), "");

		++_current->base.sched_locked;
		update_cache(0);
	}
LOG_DBG("scheduler unlocked (%p:%d)",
|
2016-11-18 22:08:24 +01:00
|
|
|
_current, _current->base.sched_locked);
|
	SYS_PORT_TRACING_FUNC(k_thread, sched_unlock);

	z_reschedule_unlocked();
}

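For reference, a typical application-level use of the two calls above: preemption by other threads is held off between them while interrupts keep running (illustrative example, not from this file):

static int counter_a, counter_b;   /* hypothetical shared data */

static void update_shared_counters(void)
{
	k_sched_lock();
	counter_a++;
	counter_b = counter_a;     /* other threads see both updates together */
	k_sched_unlock();          /* may reschedule immediately if needed */
}
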
struct k_thread *z_swap_next_thread(void)
{
#ifdef CONFIG_SMP
	struct k_thread *ret = next_up();

	if (ret == _current) {
		/* When not swapping, have to signal IPIs here. In
		 * the context switch case it must happen later, after
		 * _current gets requeued.
		 */
		signal_pending_ipi();
	}
	return ret;
#else
	return _kernel.ready_q.cache;
#endif /* CONFIG_SMP */
}

#ifdef CONFIG_USE_SWITCH
/* Just a wrapper around _current = xxx with tracing */
static inline void set_current(struct k_thread *new_thread)
{
	z_thread_mark_switched_out();
	_current_cpu->current = new_thread;
}

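The function below is what interrupt exit uses to choose the next thread; its exact contract is spelled out in the Doxygen block that follows. As a purely hypothetical illustration of a consumer (arch_irq_exit_hook() is invented for this sketch and is not part of any port), an architecture's interrupt-exit path might use it roughly like this:

/* Hypothetical sketch: on exit from a non-nested interrupt, ask the
 * scheduler which thread to resume next.
 */
static void *arch_irq_exit_hook(void *interrupted)
{
	void *next = z_get_next_switch_handle(interrupted);

	if (next != interrupted) {
		/* A different thread was chosen; the port would now restore
		 * that thread's context from the returned switch handle.
		 */
	}
	return next;
}
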
/**
 * @brief Determine next thread to execute upon completion of an interrupt
 *
 * Thread preemption is performed by context switching after the completion
 * of a non-recursed interrupt. This function determines which thread to
 * switch to if any. This function accepts as @p interrupted either:
 *
 * - The handle for the interrupted thread in which case the thread's context
 *   must already be fully saved and ready to be picked up by a different CPU.
 *
 * - NULL if more work is required to fully save the thread's state after
 *   it is known that a new thread is to be scheduled. It is up to the caller
 *   to store the handle resulting from the thread that is being switched out
 *   in that thread's "switch_handle" field after its
 *   context has fully been saved, following the same requirements as with
 *   the @ref arch_switch() function.
 *
 * If a new thread needs to be scheduled then its handle is returned.
 * Otherwise the same value provided as @p interrupted is returned back.
 * Those handles are the same opaque types used by the @ref arch_switch()
 * function.
 *
 * @warning
 * The @ref _current value may have changed after this call and not refer
 * to the interrupted thread anymore. It might be necessary to make a local
 * copy before calling this function.
 *
 * @param interrupted Handle for the thread that was interrupted or NULL.
 * @retval Handle for the next thread to execute, or @p interrupted when
 *         no new thread is to be scheduled.
 */
void *z_get_next_switch_handle(void *interrupted)
{
	z_check_stack_sentinel();

#ifdef CONFIG_SMP
	void *ret = NULL;

	K_SPINLOCK(&_sched_spinlock) {
kernel: Add cache coherence management framework
Zephyr SMP kernels need to be able to run on architectures with
incoherent caches. Naive implementation of synchronization on such
architectures requires extensive cache flushing (e.g. flush+invalidate
everything on every spin lock operation, flush on every unlock!) and
is a performance problem.
Instead, many of these systems will have access to separate "coherent"
(usually uncached) and "incoherent" regions of memory. Where this is
available, place all writable data sections by default into the
coherent region. An "__incoherent" attribute flag is defined for data
regions that are known to be CPU-local and which should use the cache.
By default, this is used for stack memory.
Stack memory will be incoherent by default, as by definition it is
local to its current thread. This requires special cache management
on context switch, so an arch API has been added for that.
Also, when enabled, add assertions to strategic places to ensure that
shared kernel data is indeed coherent. We check thread objects, the
_kernel struct, waitq's, timeouts and spinlocks. In practice almost
all kernel synchronization is built on top of these structures, and
any shared data structs will contain at least one of them.
Signed-off-by: Andy Ross <andrew.j.ross@intel.com>
Signed-off-by: Anas Nashif <anas.nashif@intel.com>
2020-05-13 17:34:04 +02:00

		struct k_thread *old_thread = _current, *new_thread;

kernel/sched: Fix rare SMP deadlock
It was possible with pathological timing (see below) for the scheduler
to pick a cycle of threads on each CPU and enter the context switch
path on all of them simultaneously.
Example:
* CPU0 is idle, CPU1 is running thread A
* CPU1 makes high priority thread B runnable
* CPU1 reaches a schedule point (or returns from an interrupt) and
decides to run thread B instead
* CPU0 simultaneously takes its IPI and returns, selecting thread A
Now both CPUs enter wait_for_switch() to spin, waiting for the context
switch code on the other thread to finish and mark the thread
runnable. So we have a deadlock, each CPU is spinning waiting for the
other!
Actually, in practice this seems not to happen on existing hardware
platforms, it's only exercisable in emulation. The reason is that the
hardware IPI time is much faster than the software paths required to
reach a schedule point or interrupt exit, so CPU1 always selects the
newly scheduled thread and no deadlock appears. I tried for a bit to
make this happen with a cycle of three threads, but it's complicated
to get right and I still couldn't get the timing to hit correctly. In
qemu, though, the IPI is implemented as a Unix signal sent to the
thread running the other CPU, which is far slower and opens the window
to see this happen.
The solution is simple enough: don't store the _current thread in the
run queue until we are on the tail end of the context switch path,
after wait_for_switch() and going to reach the end in guaranteed time.
Note that this requires changing a little logic to handle the yield
case: because we can no longer rely on _current's position in the run
queue to suppress it, we need to do the priority comparison directly
based on the existing "swap_ok" flag (which has always meant
"yielded", and maybe should be renamed).
Signed-off-by: Andy Ross <andrew.j.ross@intel.com>
2021-02-08 17:28:54 +01:00

		if (IS_ENABLED(CONFIG_SMP)) {
			old_thread->switch_handle = NULL;
		}
		new_thread = next_up();

kernel/sched: Add "thread_usage" API for thread runtime cycle monitoring
This is an alternate backend that does what THREAD_RUNTIME_STATS is
doing currently, but with a few advantages:
* Correctly synchronized: you can't race against a running thread
(potentially on another CPU!) while querying its usage.
* Realtime results: you get the right answer always, up to timer
precision, even if a thread has been running for a while
uninterrupted and hasn't updated its total.
* Portable, no need for per-architecture code at all for the simple
case. (It leverages the USE_SWITCH layer to do this, so won't work
on older architectures)
* Faster/smaller: minimizes use of 64 bit math; lower overhead in
thread struct (keeps the scratch "started" time in the CPU struct
instead). One 64 bit counter per thread and a 32 bit scratch
register in the CPU struct.
* Standalone. It's a core (but optional) scheduler feature, no
dependence on para-kernel configuration like the tracing
infrastructure.
* More precise: allows architectures to optionally call a trivial
zero-argument/no-result cdecl function out of interrupt entry to
avoid accounting for ISR runtime in thread totals. No configuration
needed here, if it's called then you get proper ISR accounting, and
if not you don't.
For right now, pending unification, it's added side-by-side with the
older API and left as a z_*() internal symbol.
Signed-off-by: Andy Ross <andrew.j.ross@intel.com>
2021-09-27 17:22:43 +02:00
|
|
|
z_sched_usage_switch(new_thread);
|
|
|
|
|
		if (old_thread != new_thread) {
			update_metairq_preempt(new_thread);
			z_sched_switch_spin(new_thread);
			arch_cohere_stacks(old_thread, interrupted, new_thread);

			_current_cpu->swap_ok = 0;
			new_thread->base.cpu = arch_curr_cpu()->id;
			set_current(new_thread);

#ifdef CONFIG_TIMESLICING
			z_reset_time_slice(new_thread);
#endif /* CONFIG_TIMESLICING */

#ifdef CONFIG_SPIN_VALIDATE
			/* Changed _current! Update the spinlock
			 * bookkeeping so the validation doesn't get
			 * confused when the "wrong" thread tries to
			 * release the lock.
			 */
			z_spin_lock_set_owner(&_sched_spinlock);
#endif /* CONFIG_SPIN_VALIDATE */

			/* A queued (runnable) old/current thread
			 * needs to be added back to the run queue
			 * here, and atomically with its switch handle
			 * being set below. This is safe now, as we
			 * will not return into it.
			 */
			if (z_is_thread_queued(old_thread)) {
				runq_add(old_thread);
			}
		}
kernel: Add cache coherence management framework
Zephyr SMP kernels need to be able to run on architectures with
incoherent caches. Naive implementation of synchronization on such
architectures requires extensive cache flushing (e.g. flush+invalidate
everything on every spin lock operation, flush on every unlock!) and
is a performance problem.
Instead, many of these systems will have access to separate "coherent"
(usually uncached) and "incoherent" regions of memory. Where this is
available, place all writable data sections by default into the
coherent region. An "__incoherent" attribute flag is defined for data
regions that are known to be CPU-local and which should use the cache.
By default, this is used for stack memory.
Stack memory will be incoherent by default, as by definition it is
local to its current thread. This requires special cache management
on context switch, so an arch API has been added for that.
Also, when enabled, add assertions to strategic places to ensure that
shared kernel data is indeed coherent. We check thread objects, the
_kernel struct, waitq's, timeouts and spinlocks. In practice almost
all kernel synchronization is built on top of these structures, and
any shared data structs will contain at least one of them.
Signed-off-by: Andy Ross <andrew.j.ross@intel.com>
Signed-off-by: Anas Nashif <anas.nashif@intel.com>
2020-05-13 17:34:04 +02:00
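
A minimal sketch of the kind of coherence assertion described above, assuming an arch-provided arch_mem_coherent() predicate and the CONFIG_KERNEL_COHERENCE switch (both names are assumptions in this sketch, not guaranteed to match the sources).

#ifdef CONFIG_KERNEL_COHERENCE
#define ASSERT_MEM_COHERENT(ptr) \
	__ASSERT(arch_mem_coherent((void *)(ptr)), \
		 "shared kernel data at %p is not in coherent memory", (void *)(ptr))
#else
#define ASSERT_MEM_COHERENT(ptr) do { } while (false)
#endif
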
		old_thread->switch_handle = interrupted;
		ret = new_thread->switch_handle;
		if (IS_ENABLED(CONFIG_SMP)) {
			/* Active threads MUST have a null here */
			new_thread->switch_handle = NULL;
		}
	}
	signal_pending_ipi();
	return ret;
#else
kernel/sched: Add "thread_usage" API for thread runtime cycle monitoring
This is an alternate backend that does what THREAD_RUNTIME_STATS is
doing currently, but with a few advantages:
* Correctly synchronized: you can't race against a running thread
(potentially on another CPU!) while querying its usage.
* Realtime results: you get the right answer always, up to timer
precision, even if a thread has been running for a while
uninterrupted and hasn't updated its total.
* Portable, no need for per-architecture code at all for the simple
case. (It leverages the USE_SWITCH layer to do this, so won't work
on older architectures)
* Faster/smaller: minimizes use of 64 bit math; lower overhead in
thread struct (keeps the scratch "started" time in the CPU struct
instead). One 64 bit counter per thread and a 32 bit scratch
register in the CPU struct.
* Standalone. It's a core (but optional) scheduler feature, no
dependence on para-kernel configuration like the tracing
infrastructure.
* More precise: allows architectures to optionally call a trivial
zero-argument/no-result cdecl function out of interrupt entry to
avoid accounting for ISR runtime in thread totals. No configuration
needed here, if it's called then you get proper ISR accounting, and
if not you don't.
For right now, pending unification, it's added side-by-side with the
older API and left as a z_*() internal symbol.
Signed-off-by: Andy Ross <andrew.j.ross@intel.com>
2021-09-27 17:22:43 +02:00
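
Reduced to its core, the accounting is a per-CPU scratch timestamp plus a 64-bit per-thread total, updated at switch time; a simplified sketch with illustrative field names (usage0, usage_cycles), assuming kernel-internal headers:

static void usage_account_switch_out(struct k_thread *old_thread)
{
	/* Hedged sketch: credit the cycles run since the last switch-in to
	 * the outgoing thread, then restart the per-CPU scratch timestamp.
	 */
	uint32_t now = k_cycle_get_32();

	old_thread->base.usage_cycles += (uint64_t)(now - _current_cpu->usage0);
	_current_cpu->usage0 = now;
}
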
	z_sched_usage_switch(_kernel.ready_q.cache);
	_current->switch_handle = interrupted;
	set_current(_kernel.ready_q.cache);
	return _current->switch_handle;
#endif /* CONFIG_SMP */
}
#endif /* CONFIG_USE_SWITCH */

int z_unpend_all(_wait_q_t *wait_q)
{
	int need_sched = 0;
	struct k_thread *thread;

	while ((thread = z_waitq_head(wait_q)) != NULL) {
		z_unpend_thread(thread);
		z_ready_thread(thread);
		need_sched = 1;
	}

	return need_sched;
}
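
For reference, the usual caller pattern looks like the sketch below (the object type, its wait_q/lock members and my_obj_wake_all() are illustrative, not taken from this file): wake every waiter, then reschedule only if someone actually became ready.

struct my_obj {
	struct k_spinlock lock;
	_wait_q_t wait_q;
};

static void my_obj_wake_all(struct my_obj *obj)
{
	k_spinlock_key_t key = k_spin_lock(&obj->lock);

	if (z_unpend_all(&obj->wait_q) != 0) {
		/* at least one waiter is now ready: let the scheduler pick */
		z_reschedule(&obj->lock, key);
	} else {
		k_spin_unlock(&obj->lock, key);
	}
}
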
void init_ready_q(struct _ready_q *ready_q)
{
#if defined(CONFIG_SCHED_SCALABLE)
	ready_q->runq = (struct _priq_rb) {
		.tree = {
			.lessthan_fn = z_priq_rb_lessthan,
		}
	};
#elif defined(CONFIG_SCHED_MULTIQ)
	for (int i = 0; i < ARRAY_SIZE(_kernel.ready_q.runq.queues); i++) {
		sys_dlist_init(&ready_q->runq.queues[i]);
	}
#else
	sys_dlist_init(&ready_q->runq);
#endif
}

void z_sched_init(void)
{
#ifdef CONFIG_SCHED_CPU_MASK_PIN_ONLY
	for (int i = 0; i < CONFIG_MP_MAX_NUM_CPUS; i++) {
		init_ready_q(&_kernel.cpus[i].ready_q);
	}
#else
	init_ready_q(&_kernel.ready_q);
#endif /* CONFIG_SCHED_CPU_MASK_PIN_ONLY */
}

void z_impl_k_thread_priority_set(k_tid_t thread, int prio)
{
	/*
	 * Use NULL, since we cannot know what the entry point is (we do not
	 * keep track of it) and idle cannot change its priority.
	 */
	Z_ASSERT_VALID_PRIO(prio, NULL);
	__ASSERT(!arch_is_in_isr(), "");

	bool need_sched = z_thread_prio_set((struct k_thread *)thread, prio);

	flag_ipi();
	if (need_sched && _current->base.sched_locked == 0U) {
		z_reschedule_unlocked();
	}
}

#ifdef CONFIG_USERSPACE
userspace: Support for split 64 bit arguments
System call arguments, at the arch layer, are single words. So
passing wider values requires splitting them into two registers at
call time. This gets even more complicated for values (e.g
k_timeout_t) that may have different sizes depending on configuration.
This patch adds a feature to gen_syscalls.py to detect functions with
wide arguments and automatically generates code to split/unsplit them.
Unfortunately the current scheme of Z_SYSCALL_DECLARE_* macros won't
work with functions like this, because for N arguments (our current
maximum N is 10) there are 2^N possible configurations of argument
widths. So this generates the complete functions for each handler and
wrapper, effectively doing in python what was originally done in the
preprocessor.
Another complexity is that traditionally the z_hdlr_*() function for a
system call has taken the raw list of word arguments, which does not
work when some of those arguments must be 64 bit types. So instead of
using a single Z_SYSCALL_HANDLER macro, this splits the job of
z_hdlr_*() into two steps: An automatically-generated unmarshalling
function, z_mrsh_*(), which then calls a user-supplied verification
function z_vrfy_*(). The verification function is typesafe, and is a
simple C function with exactly the same argument and return signature
as the syscall impl function. It is also not responsible for
validating the pointers to the extra parameter array or a wide return
value, that code gets automatically generated.
This commit includes new vrfy/mrsh handling for all syscalls invoked
during CI runs. Future commits will port the less testable code.
Signed-off-by: Andy Ross <andrew.j.ross@intel.com>
2019-08-06 22:34:31 +02:00
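
The marshalling itself is just splitting a wide value into two machine words and re-joining it on the other side; a standalone sketch in plain C (not the generated code, and assuming <stdint.h> types):

static inline void split64(uint64_t v, uint32_t *lo, uint32_t *hi)
{
	/* low word first, high word second: the generated wrapper passes
	 * these through two 32-bit syscall argument slots
	 */
	*lo = (uint32_t)(v & 0xFFFFFFFFU);
	*hi = (uint32_t)(v >> 32);
}

static inline uint64_t unsplit64(uint32_t lo, uint32_t hi)
{
	return ((uint64_t)hi << 32) | lo;
}
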
static inline void z_vrfy_k_thread_priority_set(k_tid_t thread, int prio)
{
	K_OOPS(K_SYSCALL_OBJ(thread, K_OBJ_THREAD));
	K_OOPS(K_SYSCALL_VERIFY_MSG(_is_valid_prio(prio, NULL),
"invalid thread priority %d", prio));
|
2024-02-24 16:37:06 +01:00
|
|
|
#ifndef CONFIG_USERSPACE_THREAD_MAY_RAISE_PRIORITY
|
2023-09-27 13:20:28 +02:00
|
|
|
K_OOPS(K_SYSCALL_VERIFY_MSG((int8_t)prio >= thread->base.prio,
|
2018-05-05 00:57:57 +02:00
|
|
|
"thread priority may only be downgraded (%d < %d)",
|
|
|
|
prio, thread->base.prio));
|
2024-03-08 12:00:10 +01:00
|
|
|
#endif /* CONFIG_USERSPACE_THREAD_MAY_RAISE_PRIORITY */
|
	z_impl_k_thread_priority_set(thread, prio);
}
#include <syscalls/k_thread_priority_set_mrsh.c>
#endif /* CONFIG_USERSPACE */
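
From application code the public wrapper is used directly; an illustrative usage sketch (the thread handle and priority value are assumptions, and <zephyr/kernel.h> is assumed to be included):

static void demote_worker(k_tid_t worker_tid)
{
	/* for preemptible threads, a larger priority number means lower priority */
	k_thread_priority_set(worker_tid, K_PRIO_PREEMPT(10));
}
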
#ifdef CONFIG_SCHED_DEADLINE
void z_impl_k_thread_deadline_set(k_tid_t tid, int deadline)
{
	struct k_thread *thread = tid;
	int32_t newdl = k_cycle_get_32() + deadline;

	/* The prio_deadline field changes the sorting order, so can't
	 * change it while the thread is in the run queue (dlists
	 * actually are benign as long as we requeue it before we
	 * release the lock, but an rbtree will blow up if we break
	 * sorting!)
	 */
	K_SPINLOCK(&_sched_spinlock) {
		if (z_is_thread_queued(thread)) {
			dequeue_thread(thread);
			thread->base.prio_deadline = newdl;
			queue_thread(thread);
		} else {
			thread->base.prio_deadline = newdl;
		}
	}
}

#ifdef CONFIG_USERSPACE
static inline void z_vrfy_k_thread_deadline_set(k_tid_t tid, int deadline)
{
	struct k_thread *thread = tid;

	K_OOPS(K_SYSCALL_OBJ(thread, K_OBJ_THREAD));
	K_OOPS(K_SYSCALL_VERIFY_MSG(deadline > 0,
				    "invalid thread deadline %d",
				    (int)deadline));

	z_impl_k_thread_deadline_set((k_tid_t)thread, deadline);
}
#include <syscalls/k_thread_deadline_set_mrsh.c>
#endif /* CONFIG_USERSPACE */
#endif /* CONFIG_SCHED_DEADLINE */
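
With CONFIG_SCHED_DEADLINE enabled, a thread typically sets its deadline relative to "now", in cycles, right before each work item; an illustrative sketch (the 1 ms figure and the k_us_to_cyc_ceil32() conversion are assumptions, <zephyr/kernel.h> assumed included):

static void edf_iteration(void)
{
	/* illustrative 1 ms relative deadline, expressed in hardware cycles */
	k_thread_deadline_set(k_current_get(), (int)k_us_to_cyc_ceil32(1000));

	/* ... do the time-constrained work ... */
}
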
bool k_can_yield(void)
{
	return !(k_is_pre_kernel() || k_is_in_isr() ||
		 z_is_idle_thread_object(_current));
}

void z_impl_k_yield(void)
{
	__ASSERT(!arch_is_in_isr(), "");

	SYS_PORT_TRACING_FUNC(k_thread, yield);

	k_spinlock_key_t key = k_spin_lock(&_sched_spinlock);

	if (!IS_ENABLED(CONFIG_SMP) ||
	    z_is_thread_queued(_current)) {
		dequeue_thread(_current);
	}
	queue_thread(_current);
	update_cache(1);
	z_swap(&_sched_spinlock, key);
}

#ifdef CONFIG_USERSPACE
static inline void z_vrfy_k_yield(void)
{
	z_impl_k_yield();
}
#include <syscalls/k_yield_mrsh.c>
#endif /* CONFIG_USERSPACE */
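
A typical guarded yield in long-running application code (an illustrative sketch, assuming <zephyr/kernel.h> is included):

static void churn(void)
{
	for (int i = 0; i < 100000; i++) {
		/* ... a chunk of work ... */
		if (k_can_yield()) {
			/* cooperatively give up the CPU only where it is legal */
			k_yield();
		}
	}
}
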
static int32_t z_tick_sleep(k_ticks_t ticks)
{
|
2020-11-16 19:40:46 +01:00
|
|
|
uint32_t expected_wakeup_ticks;
|
2016-12-02 15:31:08 +01:00
|
|
|
|
2019-11-07 21:43:29 +01:00
|
|
|
__ASSERT(!arch_is_in_isr(), "");
|
unified: initial unified kernel implementation
Summary of what this includes:
initialization:
Copy from nano_init.c, with the following changes:
- the main thread is the continuation of the init thread, but an idle
thread is created as well
- _main() initializes threads in groups and starts the EXE group
- the ready queues are initialized
- the main thread is marked as non-essential once the system init is
done
- a weak main() symbol is provided if the application does not provide a
main() function
scheduler:
Not an exhaustive list, but basically provide primitives for:
- adding/removing a thread to/from a wait queue
- adding/removing a thread to/from the ready queue
- marking thread as ready
- locking/unlocking the scheduler
- instead of locking interrupts
- getting/setting thread priority
- checking what state (coop/preempt) a thread is currenlty running in
- rescheduling threads
- finding what thread is the next to run
- yielding/sleeping/aborting sleep
- finding the current thread
threads:
- Add operationns on threads, such as creating and starting them.
standardized handling of kernel object return codes:
- Kernel objects now cause _Swap() to return the following values:
0 => operation successful
-EAGAIN => operation timed out
-Exxxxx => operation failed for another reason
- The thread's swap_data field can be used to return any additional
information required to complete the operation, such as the actual
result of a successful operation.
timeouts:
- same as nano timeouts, renamed to simply 'timeouts'
- the kernel is still tick-based, but objects take timeout values in
ms for forward compatibility with a tickless kernel.
semaphores:
- Port of the nanokernel semaphores, which have the same basic behaviour
as the microkernel ones. Semaphore groups are not yet implemented.
- These semaphores are enhanced in that they accept an initial count and a
count limit. This allows configuring them as binary semaphores, and also
provisioning them without having to "give" the semaphore multiple times
before using them.
mutexes:
- Straight port of the microkernel mutexes. An init function is added to
allow defining them at runtime.
pipes:
- straight port
timers:
- amalgamation of nano and micro timers, with all functionalities
intact.
events:
- re-implementation, using semaphores and workqueues.
mailboxes:
- straight port
message queues:
- straight port of microkernel FIFOs
memory maps:
- straight port
workqueues:
- Basically, have all APIs follow the k_ naming rule, and use the _timeout
subsystem from the unified kernel directory, and not the _nano_timeout
one.
stacks:
- Port of the nanokernel stacks. They can now have multiple threads
pending on them and threads can wait with a timeout.
LIFOs:
- Straight port of the nanokernel LIFOs.
FIFOs:
- Straight port of the nanokernel FIFOs.
Work by: Dmitriy Korovkin <dmitriy.korovkin@windriver.com>
Peter Mitsis <peter.mitsis@windriver.com>
Allan Stephens <allan.stephens@windriver.com>
Benjamin Walsh <benjamin.walsh@windriver.com>
Change-Id: Id3cadb3694484ab2ca467889cfb029be3cd3a7d6
Signed-off-by: Benjamin Walsh <benjamin.walsh@windriver.com>
2016-09-03 00:55:39 +02:00
|
|
|
|
2022-11-23 13:42:04 +01:00
|
|
|
LOG_DBG("thread %p for %lu ticks", _current, (unsigned long)ticks);
|
unified: initial unified kernel implementation
Summary of what this includes:
initialization:
Copy from nano_init.c, with the following changes:
- the main thread is the continuation of the init thread, but an idle
thread is created as well
- _main() initializes threads in groups and starts the EXE group
- the ready queues are initialized
- the main thread is marked as non-essential once the system init is
done
- a weak main() symbol is provided if the application does not provide a
main() function
scheduler:
Not an exhaustive list, but basically provide primitives for:
- adding/removing a thread to/from a wait queue
- adding/removing a thread to/from the ready queue
- marking thread as ready
- locking/unlocking the scheduler
- instead of locking interrupts
- getting/setting thread priority
- checking what state (coop/preempt) a thread is currenlty running in
- rescheduling threads
- finding what thread is the next to run
- yielding/sleeping/aborting sleep
- finding the current thread
threads:
- Add operationns on threads, such as creating and starting them.
standardized handling of kernel object return codes:
- Kernel objects now cause _Swap() to return the following values:
0 => operation successful
-EAGAIN => operation timed out
-Exxxxx => operation failed for another reason
- The thread's swap_data field can be used to return any additional
information required to complete the operation, such as the actual
result of a successful operation.
timeouts:
- same as nano timeouts, renamed to simply 'timeouts'
- the kernel is still tick-based, but objects take timeout values in
ms for forward compatibility with a tickless kernel.
semaphores:
- Port of the nanokernel semaphores, which have the same basic behaviour
as the microkernel ones. Semaphore groups are not yet implemented.
- These semaphores are enhanced in that they accept an initial count and a
count limit. This allows configuring them as binary semaphores, and also
provisioning them without having to "give" the semaphore multiple times
before using them.
mutexes:
- Straight port of the microkernel mutexes. An init function is added to
allow defining them at runtime.
pipes:
- straight port
timers:
- amalgamation of nano and micro timers, with all functionalities
intact.
events:
- re-implementation, using semaphores and workqueues.
mailboxes:
- straight port
message queues:
- straight port of microkernel FIFOs
memory maps:
- straight port
workqueues:
- Basically, have all APIs follow the k_ naming rule, and use the _timeout
subsystem from the unified kernel directory, and not the _nano_timeout
one.
stacks:
- Port of the nanokernel stacks. They can now have multiple threads
pending on them and threads can wait with a timeout.
LIFOs:
- Straight port of the nanokernel LIFOs.
FIFOs:
- Straight port of the nanokernel FIFOs.
Work by: Dmitriy Korovkin <dmitriy.korovkin@windriver.com>
Peter Mitsis <peter.mitsis@windriver.com>
Allan Stephens <allan.stephens@windriver.com>
Benjamin Walsh <benjamin.walsh@windriver.com>
Change-Id: Id3cadb3694484ab2ca467889cfb029be3cd3a7d6
Signed-off-by: Benjamin Walsh <benjamin.walsh@windriver.com>
2016-09-03 00:55:39 +02:00
|
|
|
|
2016-12-10 01:57:17 +01:00
|
|
|
/* wait of 0 ms is treated as a 'yield' */
|
2019-05-08 22:22:46 +02:00
|
|
|
if (ticks == 0) {
|
unified: initial unified kernel implementation
Summary of what this includes:
initialization:
Copy from nano_init.c, with the following changes:
- the main thread is the continuation of the init thread, but an idle
thread is created as well
- _main() initializes threads in groups and starts the EXE group
- the ready queues are initialized
- the main thread is marked as non-essential once the system init is
done
- a weak main() symbol is provided if the application does not provide a
main() function
scheduler:
Not an exhaustive list, but basically provide primitives for:
- adding/removing a thread to/from a wait queue
- adding/removing a thread to/from the ready queue
- marking thread as ready
- locking/unlocking the scheduler
- instead of locking interrupts
- getting/setting thread priority
- checking what state (coop/preempt) a thread is currenlty running in
- rescheduling threads
- finding what thread is the next to run
- yielding/sleeping/aborting sleep
- finding the current thread
threads:
- Add operationns on threads, such as creating and starting them.
standardized handling of kernel object return codes:
- Kernel objects now cause _Swap() to return the following values:
0 => operation successful
-EAGAIN => operation timed out
-Exxxxx => operation failed for another reason
- The thread's swap_data field can be used to return any additional
information required to complete the operation, such as the actual
result of a successful operation.
timeouts:
- same as nano timeouts, renamed to simply 'timeouts'
- the kernel is still tick-based, but objects take timeout values in
ms for forward compatibility with a tickless kernel.
semaphores:
- Port of the nanokernel semaphores, which have the same basic behaviour
as the microkernel ones. Semaphore groups are not yet implemented.
- These semaphores are enhanced in that they accept an initial count and a
count limit. This allows configuring them as binary semaphores, and also
provisioning them without having to "give" the semaphore multiple times
before using them.
mutexes:
- Straight port of the microkernel mutexes. An init function is added to
allow defining them at runtime.
pipes:
- straight port
timers:
- amalgamation of nano and micro timers, with all functionalities
intact.
events:
- re-implementation, using semaphores and workqueues.
mailboxes:
- straight port
message queues:
- straight port of microkernel FIFOs
memory maps:
- straight port
workqueues:
- Basically, have all APIs follow the k_ naming rule, and use the _timeout
subsystem from the unified kernel directory, and not the _nano_timeout
one.
stacks:
- Port of the nanokernel stacks. They can now have multiple threads
pending on them and threads can wait with a timeout.
LIFOs:
- Straight port of the nanokernel LIFOs.
FIFOs:
- Straight port of the nanokernel FIFOs.
Work by: Dmitriy Korovkin <dmitriy.korovkin@windriver.com>
Peter Mitsis <peter.mitsis@windriver.com>
Allan Stephens <allan.stephens@windriver.com>
Benjamin Walsh <benjamin.walsh@windriver.com>
Change-Id: Id3cadb3694484ab2ca467889cfb029be3cd3a7d6
Signed-off-by: Benjamin Walsh <benjamin.walsh@windriver.com>
2016-09-03 00:55:39 +02:00
|
|
|
k_yield();
|
2018-10-25 17:45:08 +02:00
|
|
|
return 0;
|
unified: initial unified kernel implementation
Summary of what this includes:
initialization:
Copy from nano_init.c, with the following changes:
- the main thread is the continuation of the init thread, but an idle
thread is created as well
- _main() initializes threads in groups and starts the EXE group
- the ready queues are initialized
- the main thread is marked as non-essential once the system init is
done
- a weak main() symbol is provided if the application does not provide a
main() function
scheduler:
Not an exhaustive list, but basically provide primitives for:
- adding/removing a thread to/from a wait queue
- adding/removing a thread to/from the ready queue
- marking thread as ready
- locking/unlocking the scheduler
- instead of locking interrupts
- getting/setting thread priority
- checking what state (coop/preempt) a thread is currenlty running in
- rescheduling threads
- finding what thread is the next to run
- yielding/sleeping/aborting sleep
- finding the current thread
threads:
- Add operationns on threads, such as creating and starting them.
standardized handling of kernel object return codes:
- Kernel objects now cause _Swap() to return the following values:
0 => operation successful
-EAGAIN => operation timed out
-Exxxxx => operation failed for another reason
- The thread's swap_data field can be used to return any additional
information required to complete the operation, such as the actual
result of a successful operation.
timeouts:
- same as nano timeouts, renamed to simply 'timeouts'
- the kernel is still tick-based, but objects take timeout values in
ms for forward compatibility with a tickless kernel.
semaphores:
- Port of the nanokernel semaphores, which have the same basic behaviour
as the microkernel ones. Semaphore groups are not yet implemented.
- These semaphores are enhanced in that they accept an initial count and a
count limit. This allows configuring them as binary semaphores, and also
provisioning them without having to "give" the semaphore multiple times
before using them.
mutexes:
- Straight port of the microkernel mutexes. An init function is added to
allow defining them at runtime.
pipes:
- straight port
timers:
- amalgamation of nano and micro timers, with all functionalities
intact.
events:
- re-implementation, using semaphores and workqueues.
mailboxes:
- straight port
message queues:
- straight port of microkernel FIFOs
memory maps:
- straight port
workqueues:
- Basically, have all APIs follow the k_ naming rule, and use the _timeout
subsystem from the unified kernel directory, and not the _nano_timeout
one.
stacks:
- Port of the nanokernel stacks. They can now have multiple threads
pending on them and threads can wait with a timeout.
LIFOs:
- Straight port of the nanokernel LIFOs.
FIFOs:
- Straight port of the nanokernel FIFOs.
Work by: Dmitriy Korovkin <dmitriy.korovkin@windriver.com>
Peter Mitsis <peter.mitsis@windriver.com>
Allan Stephens <allan.stephens@windriver.com>
Benjamin Walsh <benjamin.walsh@windriver.com>
Change-Id: Id3cadb3694484ab2ca467889cfb029be3cd3a7d6
Signed-off-by: Benjamin Walsh <benjamin.walsh@windriver.com>
2016-09-03 00:55:39 +02:00
|
|
|
}
|
|
|
|
|
2021-05-26 00:49:28 +02:00
|
|
|
if (Z_TICK_ABS(ticks) <= 0) {
|
|
|
|
expected_wakeup_ticks = ticks + sys_clock_tick_get_32();
|
|
|
|
} else {
|
|
|
|
expected_wakeup_ticks = Z_TICK_ABS(ticks);
|
|
|
|
}
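
	/*
	 * Pend the current thread under the scheduler lock: convert the tick
	 * count into a k_timeout_t, pull the thread off the ready queue, arm
	 * its wakeup timeout, and swap away until the timeout fires or
	 * k_wakeup() readies it again.
	 */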
	k_timeout_t timeout = Z_TIMEOUT_TICKS(ticks);
	k_spinlock_key_t key = k_spin_lock(&_sched_spinlock);

#if defined(CONFIG_TIMESLICING) && defined(CONFIG_SWAP_NONATOMIC)
	pending_current = _current;
#endif /* CONFIG_TIMESLICING && CONFIG_SWAP_NONATOMIC */
	unready_thread(_current);
	z_add_thread_timeout(_current, timeout);
	z_mark_thread_as_suspended(_current);

	(void)z_swap(&_sched_spinlock, key);

	__ASSERT(!z_is_thread_state_set(_current, _THREAD_SUSPENDED), "");

	ticks = (k_ticks_t)expected_wakeup_ticks - sys_clock_tick_get_32();
	if (ticks > 0) {
		return ticks;
	}

	return 0;
}
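
/*
 * k_sleep() implementation. K_FOREVER suspends the caller outright (until
 * k_wakeup()); any finite timeout is slept off in ticks by z_tick_sleep(),
 * and whatever portion was cut short is reported back in milliseconds.
 */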
int32_t z_impl_k_sleep(k_timeout_t timeout)
{
	k_ticks_t ticks;

	__ASSERT(!arch_is_in_isr(), "");

	SYS_PORT_TRACING_FUNC_ENTER(k_thread, sleep, timeout);

	/* in case of K_FOREVER, we suspend */
	if (K_TIMEOUT_EQ(timeout, K_FOREVER)) {
		k_thread_suspend(_current);

		SYS_PORT_TRACING_FUNC_EXIT(k_thread, sleep, timeout, (int32_t) K_TICKS_FOREVER);

		return (int32_t) K_TICKS_FOREVER;
	}

	ticks = timeout.ticks;

	ticks = z_tick_sleep(ticks);

	int32_t ret = k_ticks_to_ms_ceil64(ticks);

	SYS_PORT_TRACING_FUNC_EXIT(k_thread, sleep, timeout, ret);

	return ret;
}
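
/*
 * Illustrative usage sketch (not part of this file's logic): the public
 * k_sleep() entry point reaches z_impl_k_sleep() above, so a caller sees,
 * for example:
 *
 *	int32_t left = k_sleep(K_MSEC(100));
 *	if (left > 0) {
 *		// another thread called k_wakeup(); 'left' ms were not slept
 *	}
 */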
#ifdef CONFIG_USERSPACE
static inline int32_t z_vrfy_k_sleep(k_timeout_t timeout)
{
	return z_impl_k_sleep(timeout);
}
#include <syscalls/k_sleep_mrsh.c>
#endif /* CONFIG_USERSPACE */
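
/*
 * k_usleep() implementation: the microsecond request is rounded up to whole
 * ticks, slept off via z_tick_sleep(), and any remainder is reported back in
 * microseconds (also rounded up).
 */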
int32_t z_impl_k_usleep(int us)
{
	int32_t ticks;

	SYS_PORT_TRACING_FUNC_ENTER(k_thread, usleep, us);

	ticks = k_us_to_ticks_ceil64(us);
	ticks = z_tick_sleep(ticks);

	int32_t ret = k_ticks_to_us_ceil64(ticks);

	SYS_PORT_TRACING_FUNC_EXIT(k_thread, usleep, us, ret);

	return ret;
}
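
/*
 * Worked example of the rounding above (illustrative numbers): with a 10 kHz
 * system tick, one tick is 100 us, so k_usleep(50) becomes a one-tick sleep;
 * requests shorter than a tick cannot be honoured exactly.
 */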
#ifdef CONFIG_USERSPACE
static inline int32_t z_vrfy_k_usleep(int us)
{
	return z_impl_k_usleep(us);
}
#include <syscalls/k_usleep_mrsh.c>
#endif /* CONFIG_USERSPACE */
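
/*
 * k_wakeup() implementation: cancels the target thread's sleep timeout,
 * clears its suspended state and, unless the thread is already active on
 * another CPU, puts it back on the ready queue; in ISR context the scheduler
 * lock is simply released instead of rescheduling immediately.
 */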
void z_impl_k_wakeup(k_tid_t thread)
|
unified: initial unified kernel implementation
Summary of what this includes:
initialization:
Copy from nano_init.c, with the following changes:
- the main thread is the continuation of the init thread, but an idle
thread is created as well
- _main() initializes threads in groups and starts the EXE group
- the ready queues are initialized
- the main thread is marked as non-essential once the system init is
done
- a weak main() symbol is provided if the application does not provide a
main() function
scheduler:
Not an exhaustive list, but basically provide primitives for:
- adding/removing a thread to/from a wait queue
- adding/removing a thread to/from the ready queue
- marking thread as ready
- locking/unlocking the scheduler
- instead of locking interrupts
- getting/setting thread priority
- checking what state (coop/preempt) a thread is currenlty running in
- rescheduling threads
- finding what thread is the next to run
- yielding/sleeping/aborting sleep
- finding the current thread
threads:
- Add operationns on threads, such as creating and starting them.
standardized handling of kernel object return codes:
- Kernel objects now cause _Swap() to return the following values:
0 => operation successful
-EAGAIN => operation timed out
-Exxxxx => operation failed for another reason
- The thread's swap_data field can be used to return any additional
information required to complete the operation, such as the actual
result of a successful operation.
timeouts:
- same as nano timeouts, renamed to simply 'timeouts'
- the kernel is still tick-based, but objects take timeout values in
ms for forward compatibility with a tickless kernel.
semaphores:
- Port of the nanokernel semaphores, which have the same basic behaviour
as the microkernel ones. Semaphore groups are not yet implemented.
- These semaphores are enhanced in that they accept an initial count and a
count limit. This allows configuring them as binary semaphores, and also
provisioning them without having to "give" the semaphore multiple times
before using them.
mutexes:
- Straight port of the microkernel mutexes. An init function is added to
allow defining them at runtime.
pipes:
- straight port
timers:
- amalgamation of nano and micro timers, with all functionalities
intact.
events:
- re-implementation, using semaphores and workqueues.
mailboxes:
- straight port
message queues:
- straight port of microkernel FIFOs
memory maps:
- straight port
workqueues:
- Basically, have all APIs follow the k_ naming rule, and use the _timeout
subsystem from the unified kernel directory, and not the _nano_timeout
one.
stacks:
- Port of the nanokernel stacks. They can now have multiple threads
pending on them and threads can wait with a timeout.
LIFOs:
- Straight port of the nanokernel LIFOs.
FIFOs:
- Straight port of the nanokernel FIFOs.
Work by: Dmitriy Korovkin <dmitriy.korovkin@windriver.com>
Peter Mitsis <peter.mitsis@windriver.com>
Allan Stephens <allan.stephens@windriver.com>
Benjamin Walsh <benjamin.walsh@windriver.com>
Change-Id: Id3cadb3694484ab2ca467889cfb029be3cd3a7d6
Signed-off-by: Benjamin Walsh <benjamin.walsh@windriver.com>
2016-09-03 00:55:39 +02:00
|
|
|
{
|
2021-03-26 10:59:08 +01:00
|
|
|
SYS_PORT_TRACING_OBJ_FUNC(k_thread, wakeup, thread);
|
|
|
|
|
2019-03-08 22:19:05 +01:00
|
|
|
if (z_is_thread_pending(thread)) {
|
unified: initial unified kernel implementation
Summary of what this includes:
initialization:
Copy from nano_init.c, with the following changes:
- the main thread is the continuation of the init thread, but an idle
thread is created as well
- _main() initializes threads in groups and starts the EXE group
- the ready queues are initialized
- the main thread is marked as non-essential once the system init is
done
- a weak main() symbol is provided if the application does not provide a
main() function
scheduler:
Not an exhaustive list, but basically provide primitives for:
- adding/removing a thread to/from a wait queue
- adding/removing a thread to/from the ready queue
- marking thread as ready
- locking/unlocking the scheduler
- instead of locking interrupts
- getting/setting thread priority
- checking what state (coop/preempt) a thread is currenlty running in
- rescheduling threads
- finding what thread is the next to run
- yielding/sleeping/aborting sleep
- finding the current thread
threads:
- Add operationns on threads, such as creating and starting them.
standardized handling of kernel object return codes:
- Kernel objects now cause _Swap() to return the following values:
0 => operation successful
-EAGAIN => operation timed out
-Exxxxx => operation failed for another reason
- The thread's swap_data field can be used to return any additional
information required to complete the operation, such as the actual
result of a successful operation.
timeouts:
- same as nano timeouts, renamed to simply 'timeouts'
- the kernel is still tick-based, but objects take timeout values in
ms for forward compatibility with a tickless kernel.
semaphores:
- Port of the nanokernel semaphores, which have the same basic behaviour
as the microkernel ones. Semaphore groups are not yet implemented.
- These semaphores are enhanced in that they accept an initial count and a
count limit. This allows configuring them as binary semaphores, and also
provisioning them without having to "give" the semaphore multiple times
before using them.
mutexes:
- Straight port of the microkernel mutexes. An init function is added to
allow defining them at runtime.
pipes:
- straight port
timers:
- amalgamation of nano and micro timers, with all functionalities
intact.
events:
- re-implementation, using semaphores and workqueues.
mailboxes:
- straight port
message queues:
- straight port of microkernel FIFOs
memory maps:
- straight port
workqueues:
- Basically, have all APIs follow the k_ naming rule, and use the _timeout
subsystem from the unified kernel directory, and not the _nano_timeout
one.
stacks:
- Port of the nanokernel stacks. They can now have multiple threads
pending on them and threads can wait with a timeout.
LIFOs:
- Straight port of the nanokernel LIFOs.
FIFOs:
- Straight port of the nanokernel FIFOs.
Work by: Dmitriy Korovkin <dmitriy.korovkin@windriver.com>
Peter Mitsis <peter.mitsis@windriver.com>
Allan Stephens <allan.stephens@windriver.com>
Benjamin Walsh <benjamin.walsh@windriver.com>
Change-Id: Id3cadb3694484ab2ca467889cfb029be3cd3a7d6
Signed-off-by: Benjamin Walsh <benjamin.walsh@windriver.com>
2016-09-03 00:55:39 +02:00
|
|
|
return;
|
|
|
|
}
|
|
|
|
|
2019-03-08 22:19:05 +01:00
|
|
|
if (z_abort_thread_timeout(thread) < 0) {
|
2019-11-08 19:44:22 +01:00
|
|
|
/* Might have just been sleeping forever */
|
|
|
|
if (thread->base.thread_state != _THREAD_SUSPENDED) {
|
|
|
|
return;
|
|
|
|
}
|
unified: initial unified kernel implementation
Summary of what this includes:
initialization:
Copy from nano_init.c, with the following changes:
- the main thread is the continuation of the init thread, but an idle
thread is created as well
- _main() initializes threads in groups and starts the EXE group
- the ready queues are initialized
- the main thread is marked as non-essential once the system init is
done
- a weak main() symbol is provided if the application does not provide a
main() function
scheduler:
Not an exhaustive list, but basically provide primitives for:
- adding/removing a thread to/from a wait queue
- adding/removing a thread to/from the ready queue
- marking thread as ready
- locking/unlocking the scheduler
- instead of locking interrupts
- getting/setting thread priority
- checking what state (coop/preempt) a thread is currenlty running in
- rescheduling threads
- finding what thread is the next to run
- yielding/sleeping/aborting sleep
- finding the current thread
threads:
- Add operationns on threads, such as creating and starting them.
standardized handling of kernel object return codes:
- Kernel objects now cause _Swap() to return the following values:
0 => operation successful
-EAGAIN => operation timed out
-Exxxxx => operation failed for another reason
- The thread's swap_data field can be used to return any additional
information required to complete the operation, such as the actual
result of a successful operation.
timeouts:
- same as nano timeouts, renamed to simply 'timeouts'
- the kernel is still tick-based, but objects take timeout values in
ms for forward compatibility with a tickless kernel.
semaphores:
- Port of the nanokernel semaphores, which have the same basic behaviour
as the microkernel ones. Semaphore groups are not yet implemented.
- These semaphores are enhanced in that they accept an initial count and a
count limit. This allows configuring them as binary semaphores, and also
provisioning them without having to "give" the semaphore multiple times
before using them.
mutexes:
- Straight port of the microkernel mutexes. An init function is added to
allow defining them at runtime.
pipes:
- straight port
timers:
- amalgamation of nano and micro timers, with all functionalities
intact.
events:
- re-implementation, using semaphores and workqueues.
mailboxes:
- straight port
message queues:
- straight port of microkernel FIFOs
memory maps:
- straight port
workqueues:
- Basically, have all APIs follow the k_ naming rule, and use the _timeout
subsystem from the unified kernel directory, and not the _nano_timeout
one.
stacks:
- Port of the nanokernel stacks. They can now have multiple threads
pending on them and threads can wait with a timeout.
LIFOs:
- Straight port of the nanokernel LIFOs.
FIFOs:
- Straight port of the nanokernel FIFOs.
Work by: Dmitriy Korovkin <dmitriy.korovkin@windriver.com>
Peter Mitsis <peter.mitsis@windriver.com>
Allan Stephens <allan.stephens@windriver.com>
Benjamin Walsh <benjamin.walsh@windriver.com>
Change-Id: Id3cadb3694484ab2ca467889cfb029be3cd3a7d6
Signed-off-by: Benjamin Walsh <benjamin.walsh@windriver.com>
2016-09-03 00:55:39 +02:00
	}

	k_spinlock_key_t key = k_spin_lock(&_sched_spinlock);

	z_mark_thread_as_not_suspended(thread);

	if (!thread_active_elsewhere(thread)) {
		ready_thread(thread);
	}
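
	/* From an ISR, just drop the lock; any needed context switch will
	 * happen on interrupt exit.  From thread context, reschedule now.
	 */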
	if (arch_is_in_isr()) {
		k_spin_unlock(&_sched_spinlock, key);
	} else {
		z_reschedule(&_sched_spinlock, key);
	}
}

#ifdef CONFIG_USERSPACE
static inline void z_vrfy_k_wakeup(k_tid_t thread)
{
	K_OOPS(K_SYSCALL_OBJ(thread, K_OBJ_THREAD));
	z_impl_k_wakeup(thread);
}
#include <syscalls/k_wakeup_mrsh.c>
#endif /* CONFIG_USERSPACE */
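
/* Return the thread currently running on this CPU. */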
k_tid_t z_impl_k_sched_current_thread_query(void)
{
#ifdef CONFIG_SMP
	/* In SMP, _current is a field read from _current_cpu, which
	 * can race with preemption before it is read. We must lock
	 * local interrupts when reading it.
	 */
	unsigned int k = arch_irq_lock();
#endif /* CONFIG_SMP */

	k_tid_t ret = _current_cpu->current;

#ifdef CONFIG_SMP
	arch_irq_unlock(k);
#endif /* CONFIG_SMP */

	return ret;
}

#ifdef CONFIG_USERSPACE
static inline k_tid_t z_vrfy_k_sched_current_thread_query(void)
{
	return z_impl_k_sched_current_thread_query();
}
#include <syscalls/k_sched_current_thread_query_mrsh.c>
#endif /* CONFIG_USERSPACE */
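
/* Make every thread pending on @a wait_q ready: remove it from the queue,
 * cancel any timeout and give it a swap return value of 0.
 */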
static inline void unpend_all(_wait_q_t *wait_q)
{
	struct k_thread *thread;

	while ((thread = z_waitq_head(wait_q)) != NULL) {
		unpend_thread_no_timeout(thread);
		(void)z_abort_thread_timeout(thread);
		arch_thread_return_value_set(thread, 0);
		ready_thread(thread);
	}
}

#ifdef CONFIG_THREAD_ABORT_HOOK
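/* Hook invoked from halt_thread() when a thread is aborted. */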
extern void thread_abort_hook(struct k_thread *thread);
#endif /* CONFIG_THREAD_ABORT_HOOK */

/**
 * @brief Halts the specified thread
 *
 * Dequeues the specified thread and moves it into the specified new state.
 *
 * @param thread Thread to halt
 * @param new_state New thread state (_THREAD_DEAD or _THREAD_SUSPENDED)
 */
static void halt_thread(struct k_thread *thread, uint8_t new_state)
{
	/* We hold the lock, and the thread is known not to be running
	 * anywhere.
	 */
	if ((thread->base.thread_state & new_state) == 0U) {
		thread->base.thread_state |= new_state;
		clear_halting(thread);
		if (z_is_thread_queued(thread)) {
			dequeue_thread(thread);
		}

		if (new_state == _THREAD_DEAD) {
			if (thread->base.pended_on != NULL) {
				unpend_thread_no_timeout(thread);
			}
			(void)z_abort_thread_timeout(thread);
			unpend_all(&thread->join_queue);
		}
#ifdef CONFIG_SMP
		unpend_all(&thread->halt_queue);
#endif /* CONFIG_SMP */
		update_cache(1);
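
		/* A suspended thread keeps its resources; only a dead thread
		 * gets the full teardown below.
		 */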
		if (new_state == _THREAD_SUSPENDED) {
			return;
		}

#if defined(CONFIG_FPU) && defined(CONFIG_FPU_SHARING)
		arch_float_disable(thread);
#endif /* CONFIG_FPU && CONFIG_FPU_SHARING */

		SYS_PORT_TRACING_FUNC(k_thread, sched_abort, thread);

		z_thread_monitor_exit(thread);
#ifdef CONFIG_THREAD_ABORT_HOOK
		thread_abort_hook(thread);
#endif /* CONFIG_THREAD_ABORT_HOOK */

#ifdef CONFIG_OBJ_CORE_THREAD
#ifdef CONFIG_OBJ_CORE_STATS_THREAD
		k_obj_core_stats_deregister(K_OBJ_CORE(thread));
#endif /* CONFIG_OBJ_CORE_STATS_THREAD */
		k_obj_core_unlink(K_OBJ_CORE(thread));
#endif /* CONFIG_OBJ_CORE_THREAD */

#ifdef CONFIG_USERSPACE
		z_mem_domain_exit_thread(thread);
		k_thread_perms_all_clear(thread);
		k_object_uninit(thread->stack_obj);
		k_object_uninit(thread);
#endif /* CONFIG_USERSPACE */

#ifdef CONFIG_THREAD_ABORT_NEED_CLEANUP
		k_thread_abort_cleanup(thread);
#endif /* CONFIG_THREAD_ABORT_NEED_CLEANUP */
	}
}
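
/* Abort @a thread: a fatal error if the thread is essential, a no-op if it
 * is already dead, otherwise halt it while holding the scheduler lock.
 */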
void z_thread_abort(struct k_thread *thread)
{
	k_spinlock_key_t key = k_spin_lock(&_sched_spinlock);

	if (z_is_thread_essential(thread)) {
		k_spin_unlock(&_sched_spinlock, key);
		__ASSERT(false, "aborting essential thread %p", thread);
		k_panic();
		return;
	}

	if ((thread->base.thread_state & _THREAD_DEAD) != 0U) {
		k_spin_unlock(&_sched_spinlock, key);
		return;
	}

	z_thread_halt(thread, key, true);
}

#if !defined(CONFIG_ARCH_HAS_THREAD_ABORT)
void z_impl_k_thread_abort(struct k_thread *thread)
{
	SYS_PORT_TRACING_OBJ_FUNC_ENTER(k_thread, abort, thread);

	z_thread_abort(thread);

	SYS_PORT_TRACING_OBJ_FUNC_EXIT(k_thread, abort, thread);
}
#endif /* !CONFIG_ARCH_HAS_THREAD_ABORT */
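
/* Implementation of k_thread_join(): returns 0 immediately if @a thread is
 * already dead, -EBUSY if it is still running and K_NO_WAIT was given,
 * -EDEADLK if joining would deadlock (joining self, or a thread that is
 * already joining the caller), and otherwise pends the caller until the
 * thread exits or the timeout expires.
 */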
int z_impl_k_thread_join(struct k_thread *thread, k_timeout_t timeout)
{
	k_spinlock_key_t key = k_spin_lock(&_sched_spinlock);
	int ret = 0;

	SYS_PORT_TRACING_OBJ_FUNC_ENTER(k_thread, join, thread, timeout);

	if ((thread->base.thread_state & _THREAD_DEAD) != 0U) {
		z_sched_switch_spin(thread);
		ret = 0;
	} else if (K_TIMEOUT_EQ(timeout, K_NO_WAIT)) {
		ret = -EBUSY;
	} else if ((thread == _current) ||
		   (thread->base.pended_on == &_current->join_queue)) {
		ret = -EDEADLK;
	} else {
		__ASSERT(!arch_is_in_isr(), "cannot join in ISR");
		add_to_waitq_locked(_current, &thread->join_queue);
		add_thread_timeout(_current, timeout);

		SYS_PORT_TRACING_OBJ_FUNC_BLOCKING(k_thread, join, thread, timeout);
		ret = z_swap(&_sched_spinlock, key);
		SYS_PORT_TRACING_OBJ_FUNC_EXIT(k_thread, join, thread, timeout, ret);

		return ret;
	}

	SYS_PORT_TRACING_OBJ_FUNC_EXIT(k_thread, join, thread, timeout, ret);

	k_spin_unlock(&_sched_spinlock, key);
	return ret;
}

#ifdef CONFIG_USERSPACE
/* Special case: don't oops if the thread is uninitialized. This is because
 * the initialization bit does double-duty for thread objects; if it is false,
 * either the thread object is truly uninitialized or the thread ran and
 * exited for some reason.
 *
 * Return true in this case, indicating we should just do nothing and return
 * success to the caller.
 */
static bool thread_obj_validate(struct k_thread *thread)
{
	struct k_object *ko = k_object_find(thread);
	int ret = k_object_validate(ko, K_OBJ_THREAD, _OBJ_INIT_TRUE);

	switch (ret) {
	case 0:
		return false;
	case -EINVAL:
		return true;
	default:
#ifdef CONFIG_LOG
		k_object_dump_error(ret, thread, ko, K_OBJ_THREAD);
#endif /* CONFIG_LOG */
		K_OOPS(K_SYSCALL_VERIFY_MSG(ret, "access denied"));
	}
	CODE_UNREACHABLE; /* LCOV_EXCL_LINE */
}

static inline int z_vrfy_k_thread_join(struct k_thread *thread,
				       k_timeout_t timeout)
{
	if (thread_obj_validate(thread)) {
		return 0;
	}

	return z_impl_k_thread_join(thread, timeout);
}
#include <syscalls/k_thread_join_mrsh.c>

static inline void z_vrfy_k_thread_abort(k_tid_t thread)
{
	if (thread_obj_validate(thread)) {
		return;
	}

	K_OOPS(K_SYSCALL_VERIFY_MSG(!z_is_thread_essential(thread),
				    "aborting essential thread %p", thread));

	z_impl_k_thread_abort((struct k_thread *)thread);
}
#include <syscalls/k_thread_abort_mrsh.c>
#endif /* CONFIG_USERSPACE */

/*
 * future scheduler.h API implementations
 */
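
/* Wake the best waiter pending on @a wait_q, if any: hand it @a swap_retval
 * and @a swap_data as its swap return values, cancel its timeout and make it
 * ready.  Returns true if a thread was woken.
 *
 * Illustrative sketch only (obj, msg and timeout are hypothetical names): a
 * simple kernel object could pair this with z_sched_wait() below, the giver
 * calling
 *
 *	(void)z_sched_wake(&obj->wait_q, 0, msg);
 *
 * and the taker calling
 *
 *	ret = z_sched_wait(&obj->lock, key, &obj->wait_q, timeout, &msg);
 */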
bool z_sched_wake(_wait_q_t *wait_q, int swap_retval, void *swap_data)
{
	struct k_thread *thread;
	bool ret = false;

	K_SPINLOCK(&_sched_spinlock) {
		thread = _priq_wait_best(&wait_q->waitq);

		if (thread != NULL) {
			z_thread_return_value_set_with_data(thread,
							    swap_retval,
							    swap_data);
			unpend_thread_no_timeout(thread);
			(void)z_abort_thread_timeout(thread);
			ready_thread(thread);
			ret = true;
		}
	}

	return ret;
}
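
/* Pend the current thread on @a wait_q, releasing @a lock (with @a key)
 * across the swap, until it is woken (e.g. by z_sched_wake()) or @a timeout
 * expires.  If @a data is non-NULL it receives the waker's swap_data.
 */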
int z_sched_wait(struct k_spinlock *lock, k_spinlock_key_t key,
		 _wait_q_t *wait_q, k_timeout_t timeout, void **data)
{
	int ret = z_pend_curr(lock, key, wait_q, timeout);

	if (data != NULL) {
		*data = _current->base.swap_data;
	}
	return ret;
}
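
/* Walk all threads pending on @a wait_q, calling @a func on each with
 * @a data.  The walk stops early if @a func returns non-zero, and that
 * value is returned.
 */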
int z_sched_waitq_walk(_wait_q_t *wait_q,
		       int (*func)(struct k_thread *, void *), void *data)
{
	struct k_thread *thread;
	int status = 0;

	K_SPINLOCK(&_sched_spinlock) {
		_WAIT_Q_FOR_EACH(wait_q, thread) {

			/*
			 * Invoke the callback function on each waiting thread
			 * for as long as there are both waiting threads AND
			 * it returns 0.
			 */

			status = func(thread, data);
			if (status != 0) {
				break;
			}
		}
	}

	return status;
}