/*
 * Copyright (c) 2018 Intel Corporation
 *
 * SPDX-License-Identifier: Apache-2.0
 */
#include <zephyr/kernel.h>
#include <ksched.h>
#include <zephyr/spinlock.h>
#include <zephyr/kernel/internal/sched_priq.h>
#include <wait_q.h>
#include <kswap.h>
#include <kernel_arch_func.h>
#include <zephyr/internal/syscall_handler.h>
#include <zephyr/drivers/timer/system_timer.h>
#include <stdbool.h>
#include <kernel_internal.h>
#include <zephyr/logging/log.h>
#include <zephyr/sys/atomic.h>
#include <zephyr/sys/math_extras.h>
#include <zephyr/timing/timing.h>
#include <zephyr/sys/util.h>

LOG_MODULE_DECLARE(os, CONFIG_KERNEL_LOG_LEVEL);

#if defined(CONFIG_SCHED_DUMB)
#define _priq_run_add           z_priq_dumb_add
#define _priq_run_remove        z_priq_dumb_remove
# if defined(CONFIG_SCHED_CPU_MASK)
#  define _priq_run_best        _priq_dumb_mask_best
# else
#  define _priq_run_best        z_priq_dumb_best
# endif
#elif defined(CONFIG_SCHED_SCALABLE)
#define _priq_run_add           z_priq_rb_add
#define _priq_run_remove        z_priq_rb_remove
#define _priq_run_best          z_priq_rb_best
#elif defined(CONFIG_SCHED_MULTIQ)
#define _priq_run_add           z_priq_mq_add
#define _priq_run_remove        z_priq_mq_remove
#define _priq_run_best          z_priq_mq_best
static ALWAYS_INLINE void z_priq_mq_add(struct _priq_mq *pq,
                                        struct k_thread *thread);
static ALWAYS_INLINE void z_priq_mq_remove(struct _priq_mq *pq,
                                           struct k_thread *thread);
#endif

#if defined(CONFIG_WAITQ_SCALABLE)
#define z_priq_wait_add         z_priq_rb_add
#define _priq_wait_remove       z_priq_rb_remove
#define _priq_wait_best         z_priq_rb_best
#elif defined(CONFIG_WAITQ_DUMB)
#define z_priq_wait_add         z_priq_dumb_add
#define _priq_wait_remove       z_priq_dumb_remove
#define _priq_wait_best         z_priq_dumb_best
#endif

struct k_spinlock sched_spinlock;

static void update_cache(int preempt_ok);
static void end_thread(struct k_thread *thread);


static inline int is_preempt(struct k_thread *thread)
{
        /* explanation in kernel_struct.h */
        return thread->base.preempt <= _PREEMPT_THRESHOLD;
}

BUILD_ASSERT(CONFIG_NUM_COOP_PRIORITIES >= CONFIG_NUM_METAIRQ_PRIORITIES,
             "You need to provide at least as many CONFIG_NUM_COOP_PRIORITIES as "
             "CONFIG_NUM_METAIRQ_PRIORITIES as Meta IRQs are just a special class of cooperative "
             "threads.");

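/* True if the thread's priority lies in the meta-IRQ band, i.e. within the
 * CONFIG_NUM_METAIRQ_PRIORITIES highest cooperative priorities.
 */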
static inline int is_metairq(struct k_thread *thread)
{
#if CONFIG_NUM_METAIRQ_PRIORITIES > 0
        return (thread->base.prio - K_HIGHEST_THREAD_PRIO)
                < CONFIG_NUM_METAIRQ_PRIORITIES;
#else
        ARG_UNUSED(thread);
        return 0;
#endif
}

#if CONFIG_ASSERT
static inline bool is_thread_dummy(struct k_thread *thread)
{
        return (thread->base.thread_state & _THREAD_DUMMY) != 0U;
}
#endif

/*
 * Return value same as e.g. memcmp
 * > 0 -> thread 1 priority  > thread 2 priority
 * = 0 -> thread 1 priority == thread 2 priority
 * < 0 -> thread 1 priority  < thread 2 priority
 * Do not rely on the actual value returned aside from the above.
 * (Again, like memcmp.)
 */
int32_t z_sched_prio_cmp(struct k_thread *thread_1,
        struct k_thread *thread_2)
{
        /* `prio` is <32b, so the below cannot overflow. */
        int32_t b1 = thread_1->base.prio;
        int32_t b2 = thread_2->base.prio;

        if (b1 != b2) {
                return b2 - b1;
        }

#ifdef CONFIG_SCHED_DEADLINE
        /* If we assume all deadlines live within the same "half" of
         * the 32 bit modulus space (this is a documented API rule),
         * then the latest deadline in the queue minus the earliest is
         * guaranteed to be (2's complement) non-negative.  We can
         * leverage that to compare the values without having to check
         * the current time.
         */
        uint32_t d1 = thread_1->base.prio_deadline;
        uint32_t d2 = thread_2->base.prio_deadline;

        if (d1 != d2) {
                /* Sooner deadline means higher effective priority.
                 * Doing the calculation with unsigned types and casting
                 * to signed isn't perfect, but at least reduces this
                 * from UB on overflow to impdef.
                 */
                return (int32_t) (d2 - d1);
        }
#endif
        return 0;
}

static ALWAYS_INLINE bool should_preempt(struct k_thread *thread,
                                         int preempt_ok)
{
        /* Preemption is OK if it's being explicitly allowed by
         * software state (e.g. the thread called k_yield())
         */
        if (preempt_ok != 0) {
                return true;
        }

        __ASSERT(_current != NULL, "");

        /* Or if we're pended/suspended/dummy (duh) */
        if (z_is_thread_prevented_from_running(_current)) {
                return true;
        }

        /* Edge case on ARM where a thread can be pended out of an
         * interrupt handler before the "synchronous" swap starts
         * context switching.  Platforms with atomic swap can never
         * hit this.
         */
        if (IS_ENABLED(CONFIG_SWAP_NONATOMIC)
            && z_is_thread_timeout_active(thread)) {
                return true;
        }

        /* Otherwise we have to be running a preemptible thread or
         * switching to a metairq
         */
        if (is_preempt(_current) || is_metairq(thread)) {
                return true;
        }

        return false;
}

#ifdef CONFIG_SCHED_CPU_MASK
static ALWAYS_INLINE struct k_thread *_priq_dumb_mask_best(sys_dlist_t *pq)
{
        /* With masks enabled we need to be prepared to walk the list
         * looking for one we can run
         */
        struct k_thread *thread;

        SYS_DLIST_FOR_EACH_CONTAINER(pq, thread, base.qnode_dlist) {
                if ((thread->base.cpu_mask & BIT(_current_cpu->id)) != 0) {
                        return thread;
                }
        }
        return NULL;
}
#endif

#if defined(CONFIG_SCHED_DUMB) || defined(CONFIG_WAITQ_DUMB)
static ALWAYS_INLINE void z_priq_dumb_add(sys_dlist_t *pq,
                                          struct k_thread *thread)
{
        struct k_thread *t;

        __ASSERT_NO_MSG(!z_is_idle_thread_object(thread));

        SYS_DLIST_FOR_EACH_CONTAINER(pq, t, base.qnode_dlist) {
                if (z_sched_prio_cmp(thread, t) > 0) {
                        sys_dlist_insert(&t->base.qnode_dlist,
                                         &thread->base.qnode_dlist);
                        return;
                }
        }

        sys_dlist_append(pq, &thread->base.qnode_dlist);
}
#endif

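/* Run queue a thread belongs on: the per-CPU queue when threads are pinned
 * with CONFIG_SCHED_CPU_MASK_PIN_ONLY, otherwise the single global queue.
 */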
static ALWAYS_INLINE void *thread_runq(struct k_thread *thread)
{
#ifdef CONFIG_SCHED_CPU_MASK_PIN_ONLY
        int cpu, m = thread->base.cpu_mask;

        /* Edge case: it's legal per the API to "make runnable" a
         * thread with all CPUs masked off (i.e. one that isn't
         * actually runnable!).  Sort of a wart in the API and maybe
         * we should address this in docs/assertions instead to avoid
         * the extra test.
         */
        cpu = m == 0 ? 0 : u32_count_trailing_zeros(m);

        return &_kernel.cpus[cpu].ready_q.runq;
#else
        ARG_UNUSED(thread);
        return &_kernel.ready_q.runq;
#endif
}

static ALWAYS_INLINE void *curr_cpu_runq(void)
{
#ifdef CONFIG_SCHED_CPU_MASK_PIN_ONLY
        return &arch_curr_cpu()->ready_q.runq;
#else
        return &_kernel.ready_q.runq;
#endif
}

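/* Thin wrappers routing add/remove/best through the selected priority-queue
 * backend for the appropriate run queue.
 */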
static ALWAYS_INLINE void runq_add(struct k_thread *thread)
{
        _priq_run_add(thread_runq(thread), thread);
}

static ALWAYS_INLINE void runq_remove(struct k_thread *thread)
{
        _priq_run_remove(thread_runq(thread), thread);
}

static ALWAYS_INLINE struct k_thread *runq_best(void)
{
        return _priq_run_best(curr_cpu_runq());
}

/* _current is never in the run queue until context switch on
 * SMP configurations, see z_requeue_current()
 */
static inline bool should_queue_thread(struct k_thread *th)
{
        return !IS_ENABLED(CONFIG_SMP) || th != _current;
}

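/* Mark a thread queued and place it on its run queue.  On SMP, _current is
 * never stored in the queue; queueing it instead records a yield request via
 * the per-CPU swap_ok flag.
 */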
static ALWAYS_INLINE void queue_thread(struct k_thread *thread)
{
        thread->base.thread_state |= _THREAD_QUEUED;
        if (should_queue_thread(thread)) {
                runq_add(thread);
        }
#ifdef CONFIG_SMP
        if (thread == _current) {
                /* add current to end of queue means "yield" */
                _current_cpu->swap_ok = true;
        }
#endif
}

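/* Clear the queued state and remove the thread from its run queue (on SMP,
 * _current is never actually stored there, so only the flag is cleared).
 */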
static ALWAYS_INLINE void dequeue_thread(struct k_thread *thread)
{
        thread->base.thread_state &= ~_THREAD_QUEUED;
        if (should_queue_thread(thread)) {
                runq_remove(thread);
        }
}

static void signal_pending_ipi(void)
{
        /* Synchronization note: you might think we need to lock these
         * two steps, but an IPI is idempotent.  It's OK if we do it
         * twice.  All we require is that if a CPU sees the flag true,
         * it is guaranteed to send the IPI, and if a core sets
         * pending_ipi, the IPI will be sent the next time through
         * this code.
         */
#if defined(CONFIG_SMP) && defined(CONFIG_SCHED_IPI_SUPPORTED)
        if (arch_num_cpus() > 1) {
                if (_kernel.pending_ipi) {
                        _kernel.pending_ipi = false;
                        arch_sched_ipi();
                }
        }
#endif
}

#ifdef CONFIG_SMP
/* Called out of z_swap() when CONFIG_SMP.  The current thread can
 * never live in the run queue until we are inexorably on the context
 * switch path on SMP, otherwise there is a deadlock condition where a
 * set of CPUs pick a cycle of threads to run and wait for them all to
 * context switch forever.
 */
void z_requeue_current(struct k_thread *curr)
{
        if (z_is_thread_queued(curr)) {
                runq_add(curr);
        }
        signal_pending_ipi();
}

static inline bool is_aborting(struct k_thread *thread)
{
        return (thread->base.thread_state & _THREAD_ABORTING) != 0U;
}
#endif

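/* Pick the best thread to run next.  On uniprocessor builds this can simply
 * consult the run queue; on SMP it must also weigh the result against
 * _current, which is not kept in the queue while it is running.
 */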
static ALWAYS_INLINE struct k_thread *next_up(void)
{
#ifdef CONFIG_SMP
        if (is_aborting(_current)) {
                end_thread(_current);
        }
#endif

        struct k_thread *thread = runq_best();

#if (CONFIG_NUM_METAIRQ_PRIORITIES > 0) && \
        (CONFIG_NUM_COOP_PRIORITIES > CONFIG_NUM_METAIRQ_PRIORITIES)
        /* MetaIRQs must always attempt to return back to a
         * cooperative thread they preempted and not whatever happens
         * to be highest priority now. The cooperative thread was
         * promised it wouldn't be preempted (by non-metairq threads)!
         */
        struct k_thread *mirqp = _current_cpu->metairq_preempted;

        if (mirqp != NULL && (thread == NULL || !is_metairq(thread))) {
                if (!z_is_thread_prevented_from_running(mirqp)) {
                        thread = mirqp;
                } else {
                        _current_cpu->metairq_preempted = NULL;
                }
        }
#endif

#ifndef CONFIG_SMP
        /* In uniprocessor mode, we can leave the current thread in
         * the queue (actually we have to, otherwise the assembly
         * context switch code for all architectures would be
         * responsible for putting it back in z_swap and ISR return!),
         * which makes this choice simple.
         */
        return (thread != NULL) ? thread : _current_cpu->idle_thread;
#else
        /* Under SMP, the "cache" mechanism for selecting the next
         * thread doesn't work, so we have more work to do to test
         * _current against the best choice from the queue.  Here, the
         * thread selected above represents "the best thread that is
         * not current".
         *
         * Subtle note on "queued": in SMP mode, _current does not
         * live in the queue, so this isn't exactly the same thing as
         * "ready", it means "is _current already added back to the
         * queue such that we don't want to re-add it".
         */
        bool queued = z_is_thread_queued(_current);
        bool active = !z_is_thread_prevented_from_running(_current);

        if (thread == NULL) {
                thread = _current_cpu->idle_thread;
        }

        if (active) {
                int32_t cmp = z_sched_prio_cmp(_current, thread);

                /* Ties only switch if state says we yielded */
                if ((cmp > 0) || ((cmp == 0) && !_current_cpu->swap_ok)) {
                        thread = _current;
                }

                if (!should_preempt(thread, _current_cpu->swap_ok)) {
                        thread = _current;
                }
        }

        /* Put _current back into the queue */
        if (thread != _current && active &&
            !z_is_idle_thread_object(_current) && !queued) {
                queue_thread(_current);
        }

        /* Take the new _current out of the queue */
        if (z_is_thread_queued(thread)) {
                dequeue_thread(thread);
        }

        _current_cpu->swap_ok = false;
        return thread;
#endif
}

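/* Re-queue a thread behind its priority peers and refresh the scheduler's
 * cached next-thread decision.
 */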
static void move_thread_to_end_of_prio_q(struct k_thread *thread)
{
        if (z_is_thread_queued(thread)) {
                dequeue_thread(thread);
        }
        queue_thread(thread);
        update_cache(thread == _current);
}

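/* Record that other CPUs need a scheduling IPI; signal_pending_ipi() sends it
 * on the next pass through the scheduler.
 */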
static void flag_ipi(void)
{
#if defined(CONFIG_SMP) && defined(CONFIG_SCHED_IPI_SUPPORTED)
        if (arch_num_cpus() > 1) {
                _kernel.pending_ipi = true;
        }
#endif
}

#ifdef CONFIG_TIMESLICING

static int slice_ticks = DIV_ROUND_UP(CONFIG_TIMESLICE_SIZE * Z_HZ_ticks, Z_HZ_ms);
static int slice_max_prio = CONFIG_TIMESLICE_PRIORITY;
static struct _timeout slice_timeouts[CONFIG_MP_MAX_NUM_CPUS];
static bool slice_expired[CONFIG_MP_MAX_NUM_CPUS];

#ifdef CONFIG_SWAP_NONATOMIC
/* If z_swap() isn't atomic, then it's possible for a timer interrupt
 * to try to timeslice away _current after it has already pended
 * itself but before the corresponding context switch.  Treat that as
 * a noop condition in z_time_slice().
 */
static struct k_thread *pending_current;
#endif

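/* Slice length in ticks for a thread: the per-thread value when
 * CONFIG_TIMESLICE_PER_THREAD provides one, otherwise the global slice_ticks.
 */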
static inline int slice_time(struct k_thread *thread)
{
        int ret = slice_ticks;

#ifdef CONFIG_TIMESLICE_PER_THREAD
        if (thread->base.slice_ticks != 0) {
                ret = thread->base.slice_ticks;
        }
#else
        ARG_UNUSED(thread);
#endif
        return ret;
}

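/* A thread participates in time slicing only if it is preemptible, has a
 * nonzero slice, sits at or below the slicing priority ceiling, is runnable,
 * and is not the idle thread; a per-thread slice overrides these checks.
 */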
static inline bool sliceable(struct k_thread *thread)
{
        bool ret = is_preempt(thread)
                && slice_time(thread) != 0
                && !z_is_prio_higher(thread->base.prio, slice_max_prio)
                && !z_is_thread_prevented_from_running(thread)
                && !z_is_idle_thread_object(thread);

#ifdef CONFIG_TIMESLICE_PER_THREAD
        ret |= thread->base.slice_ticks != 0;
#endif

        return ret;
}

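/* Per-CPU slice timer callback: mark the owning CPU's slice as expired and
 * nudge that CPU with an IPI when it is not the one handling the timeout.
 */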
static void slice_timeout(struct _timeout *t)
{
        int cpu = ARRAY_INDEX(slice_timeouts, t);

        slice_expired[cpu] = true;

        /* We need an IPI if we just handled a timeslice expiration
         * for a different CPU.  Ideally this would be able to target
         * the specific core, but that's not part of the API yet.
         */
        if (IS_ENABLED(CONFIG_SMP) && cpu != _current_cpu->id) {
                flag_ipi();
        }
}

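/* Restart the current CPU's slice timer for a newly selected thread, or leave
 * it stopped when that thread is not subject to slicing.
 */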
void z_reset_time_slice(struct k_thread *curr)
{
        int cpu = _current_cpu->id;

        z_abort_timeout(&slice_timeouts[cpu]);
        slice_expired[cpu] = false;
        if (sliceable(curr)) {
                z_add_timeout(&slice_timeouts[cpu], slice_timeout,
                              K_TICKS(slice_time(curr) - 1));
        }
}

void k_sched_time_slice_set(int32_t slice, int prio)
{
        K_SPINLOCK(&sched_spinlock) {
                slice_ticks = k_ms_to_ticks_ceil32(slice);
                slice_max_prio = prio;
                z_reset_time_slice(_current);
        }
}

#ifdef CONFIG_TIMESLICE_PER_THREAD
void k_thread_time_slice_set(struct k_thread *th, int32_t thread_slice_ticks,
                             k_thread_timeslice_fn_t expired, void *data)
{
        K_SPINLOCK(&sched_spinlock) {
                th->base.slice_ticks = thread_slice_ticks;
                th->base.slice_expired = expired;
                th->base.slice_data = data;
        }
}
#endif

/* Called out of each timer interrupt */
void z_time_slice(void)
{
        k_spinlock_key_t key = k_spin_lock(&sched_spinlock);
        struct k_thread *curr = _current;

#ifdef CONFIG_SWAP_NONATOMIC
        if (pending_current == curr) {
                z_reset_time_slice(curr);
                k_spin_unlock(&sched_spinlock, key);
                return;
        }
        pending_current = NULL;
#endif

        if (slice_expired[_current_cpu->id] && sliceable(curr)) {
#ifdef CONFIG_TIMESLICE_PER_THREAD
                if (curr->base.slice_expired) {
                        k_spin_unlock(&sched_spinlock, key);
                        curr->base.slice_expired(curr, curr->base.slice_data);
                        key = k_spin_lock(&sched_spinlock);
                }
#endif
                if (!z_is_thread_prevented_from_running(curr)) {
                        move_thread_to_end_of_prio_q(curr);
                }
                z_reset_time_slice(curr);
        }
        k_spin_unlock(&sched_spinlock, key);
}
#endif

/* Track cooperative threads preempted by metairqs so we can return to
 * them specifically. Called at the moment a new thread has been
 * selected to run.
 */
static void update_metairq_preempt(struct k_thread *thread)
{
#if (CONFIG_NUM_METAIRQ_PRIORITIES > 0) && \
        (CONFIG_NUM_COOP_PRIORITIES > CONFIG_NUM_METAIRQ_PRIORITIES)
        if (is_metairq(thread) && !is_metairq(_current) &&
            !is_preempt(_current)) {
                /* Record new preemption */
                _current_cpu->metairq_preempted = _current;
        } else if (!is_metairq(thread) && !z_is_idle_thread_object(thread)) {
                /* Returning from existing preemption */
                _current_cpu->metairq_preempted = NULL;
        }
#else
        ARG_UNUSED(thread);
#endif
}

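/* Refresh the scheduler's notion of what should run next: on uniprocessor
 * builds this recomputes _kernel.ready_q.cache, the thread picked up at the
 * next context switch; on SMP it only records whether cooperative swapping is
 * currently allowed on this CPU.
 */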
static void update_cache(int preempt_ok)
|
unified: initial unified kernel implementation
Summary of what this includes:
initialization:
Copy from nano_init.c, with the following changes:
- the main thread is the continuation of the init thread, but an idle
thread is created as well
- _main() initializes threads in groups and starts the EXE group
- the ready queues are initialized
- the main thread is marked as non-essential once the system init is
done
- a weak main() symbol is provided if the application does not provide a
main() function
scheduler:
Not an exhaustive list, but basically provide primitives for:
- adding/removing a thread to/from a wait queue
- adding/removing a thread to/from the ready queue
- marking thread as ready
- locking/unlocking the scheduler
- instead of locking interrupts
- getting/setting thread priority
- checking what state (coop/preempt) a thread is currenlty running in
- rescheduling threads
- finding what thread is the next to run
- yielding/sleeping/aborting sleep
- finding the current thread
threads:
- Add operationns on threads, such as creating and starting them.
standardized handling of kernel object return codes:
- Kernel objects now cause _Swap() to return the following values:
0 => operation successful
-EAGAIN => operation timed out
-Exxxxx => operation failed for another reason
- The thread's swap_data field can be used to return any additional
information required to complete the operation, such as the actual
result of a successful operation.
timeouts:
- same as nano timeouts, renamed to simply 'timeouts'
- the kernel is still tick-based, but objects take timeout values in
ms for forward compatibility with a tickless kernel.
semaphores:
- Port of the nanokernel semaphores, which have the same basic behaviour
as the microkernel ones. Semaphore groups are not yet implemented.
- These semaphores are enhanced in that they accept an initial count and a
count limit. This allows configuring them as binary semaphores, and also
provisioning them without having to "give" the semaphore multiple times
before using them.
mutexes:
- Straight port of the microkernel mutexes. An init function is added to
allow defining them at runtime.
pipes:
- straight port
timers:
- amalgamation of nano and micro timers, with all functionalities
intact.
events:
- re-implementation, using semaphores and workqueues.
mailboxes:
- straight port
message queues:
- straight port of microkernel FIFOs
memory maps:
- straight port
workqueues:
- Basically, have all APIs follow the k_ naming rule, and use the _timeout
subsystem from the unified kernel directory, and not the _nano_timeout
one.
stacks:
- Port of the nanokernel stacks. They can now have multiple threads
pending on them and threads can wait with a timeout.
LIFOs:
- Straight port of the nanokernel LIFOs.
FIFOs:
- Straight port of the nanokernel FIFOs.
Work by: Dmitriy Korovkin <dmitriy.korovkin@windriver.com>
Peter Mitsis <peter.mitsis@windriver.com>
Allan Stephens <allan.stephens@windriver.com>
Benjamin Walsh <benjamin.walsh@windriver.com>
Change-Id: Id3cadb3694484ab2ca467889cfb029be3cd3a7d6
Signed-off-by: Benjamin Walsh <benjamin.walsh@windriver.com>
2016-09-03 00:55:39 +02:00
|
|
|
{
|
2018-05-03 23:51:49 +02:00
|
|
|
#ifndef CONFIG_SMP
|
2019-12-19 14:19:45 +01:00
|
|
|
struct k_thread *thread = next_up();
|
2018-05-21 20:48:35 +02:00
|
|
|
|
2019-12-19 14:19:45 +01:00
|
|
|
if (should_preempt(thread, preempt_ok)) {
|
2019-08-17 06:29:26 +02:00
|
|
|
#ifdef CONFIG_TIMESLICING
|
2019-12-19 14:19:45 +01:00
|
|
|
if (thread != _current) {
|
2021-12-01 03:26:26 +01:00
|
|
|
z_reset_time_slice(thread);
|
2018-09-25 19:56:09 +02:00
|
|
|
}
|
2019-08-17 06:29:26 +02:00
|
|
|
#endif
|
2019-12-19 14:19:45 +01:00
|
|
|
update_metairq_preempt(thread);
|
|
|
|
_kernel.ready_q.cache = thread;
|
2018-05-30 20:23:02 +02:00
|
|
|
} else {
|
|
|
|
_kernel.ready_q.cache = _current;
|
2018-05-21 20:48:35 +02:00
|
|
|
}
|
2018-05-30 20:23:02 +02:00
|
|
|
|
|
|
|
#else
|
|
|
|
/* The way this works is that the CPU record keeps its
|
|
|
|
* "cooperative swapping is OK" flag until the next reschedule
|
|
|
|
* call or context switch. It doesn't need to be tracked per
|
|
|
|
* thread because if the thread gets preempted for whatever
|
|
|
|
* reason the scheduler will make the same decision anyway.
|
|
|
|
*/
|
|
|
|
_current_cpu->swap_ok = preempt_ok;
|
2018-05-03 23:51:49 +02:00
|
|
|
#endif
|
|
|
|
}
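
/* Keeping _kernel.ready_q.cache eagerly up to date means that "what runs
 * next" is always a single memory fetch, which is what keeps the
 * interrupt-exit reschedule path cheap (see the ready-thread-cache
 * rationale in the commit history).
 */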

static bool thread_active_elsewhere(struct k_thread *thread)
{
	/* True if the thread is currently running on another CPU.
	 * There are more scalable designs to answer this question in
	 * constant time, but this is fine for now.
	 */
#ifdef CONFIG_SMP
	int currcpu = _current_cpu->id;

	unsigned int num_cpus = arch_num_cpus();

	for (int i = 0; i < num_cpus; i++) {
		if ((i != currcpu) &&
		    (_kernel.cpus[i].current == thread)) {
			return true;
		}
	}
#endif
	ARG_UNUSED(thread);
	return false;
}

static void ready_thread(struct k_thread *thread)
{
#ifdef CONFIG_KERNEL_COHERENCE
	__ASSERT_NO_MSG(arch_mem_coherent(thread));
#endif

	/* If the thread is already queued, do not try to add it to the
	 * run queue again.
	 */
	if (!z_is_thread_queued(thread) && z_is_thread_ready(thread)) {
		SYS_PORT_TRACING_OBJ_FUNC(k_thread, sched_ready, thread);

		queue_thread(thread);
		update_cache(0);
		flag_ipi();
	}
}
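
/* ready_thread() assumes sched_spinlock is already held; the callers below
 * either wrap it in K_SPINLOCK() or take the lock explicitly.
 */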

void z_ready_thread(struct k_thread *thread)
{
	K_SPINLOCK(&sched_spinlock) {
		if (!thread_active_elsewhere(thread)) {
			ready_thread(thread);
		}
	}
}

void z_move_thread_to_end_of_prio_q(struct k_thread *thread)
{
	K_SPINLOCK(&sched_spinlock) {
		move_thread_to_end_of_prio_q(thread);
	}
}

void z_sched_start(struct k_thread *thread)
{
	k_spinlock_key_t key = k_spin_lock(&sched_spinlock);

	if (z_has_thread_started(thread)) {
		k_spin_unlock(&sched_spinlock, key);
		return;
	}

	z_mark_thread_as_started(thread);
	ready_thread(thread);
	z_reschedule(&sched_spinlock, key);
}
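
/* z_sched_start() is safe to call on a thread that has already been
 * started; the early return above makes it a no-op in that case. It is
 * assumed to back the public thread-start path.
 */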

void z_impl_k_thread_suspend(struct k_thread *thread)
{
	SYS_PORT_TRACING_OBJ_FUNC_ENTER(k_thread, suspend, thread);

	(void)z_abort_thread_timeout(thread);

	K_SPINLOCK(&sched_spinlock) {
		if (z_is_thread_queued(thread)) {
			dequeue_thread(thread);
		}
		z_mark_thread_as_suspended(thread);
		update_cache(thread == _current);
	}

	if (thread == _current) {
		z_reschedule_unlocked();
	}

	SYS_PORT_TRACING_OBJ_FUNC_EXIT(k_thread, suspend, thread);
}

#ifdef CONFIG_USERSPACE
static inline void z_vrfy_k_thread_suspend(struct k_thread *thread)
{
	Z_OOPS(Z_SYSCALL_OBJ(thread, K_OBJ_THREAD));
	z_impl_k_thread_suspend(thread);
}
#include <syscalls/k_thread_suspend_mrsh.c>
#endif
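
/* Illustrative application-level pairing (not part of this file; the
 * "worker" thread object is a placeholder):
 *
 *	k_thread_suspend(&worker);	// worker stops being schedulable
 *	...
 *	k_thread_resume(&worker);	// worker becomes ready again
 *
 * Suspending the calling thread itself triggers the immediate reschedule
 * seen in z_impl_k_thread_suspend() above.
 */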

void z_impl_k_thread_resume(struct k_thread *thread)
{
	SYS_PORT_TRACING_OBJ_FUNC_ENTER(k_thread, resume, thread);

	k_spinlock_key_t key = k_spin_lock(&sched_spinlock);

	/* Do not try to resume a thread that was not suspended */
	if (!z_is_thread_suspended(thread)) {
		k_spin_unlock(&sched_spinlock, key);
		return;
	}

	z_mark_thread_as_not_suspended(thread);
	ready_thread(thread);

	z_reschedule(&sched_spinlock, key);

	SYS_PORT_TRACING_OBJ_FUNC_EXIT(k_thread, resume, thread);
}

#ifdef CONFIG_USERSPACE
static inline void z_vrfy_k_thread_resume(struct k_thread *thread)
{
	Z_OOPS(Z_SYSCALL_OBJ(thread, K_OBJ_THREAD));
	z_impl_k_thread_resume(thread);
}
#include <syscalls/k_thread_resume_mrsh.c>
#endif

static _wait_q_t *pended_on_thread(struct k_thread *thread)
{
	__ASSERT_NO_MSG(thread->base.pended_on);

	return thread->base.pended_on;
}

static void unready_thread(struct k_thread *thread)
{
	if (z_is_thread_queued(thread)) {
		dequeue_thread(thread);
	}
	update_cache(thread == _current);
}

/* sched_spinlock must be held */
static void add_to_waitq_locked(struct k_thread *thread, _wait_q_t *wait_q)
{
	unready_thread(thread);
	z_mark_thread_as_pending(thread);

	SYS_PORT_TRACING_FUNC(k_thread, sched_pend, thread);

	if (wait_q != NULL) {
		thread->base.pended_on = wait_q;
		z_priq_wait_add(&wait_q->waitq, thread);
	}
}
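
/* A NULL wait_q is tolerated here: the thread is marked pending without
 * being linked to any wait queue, and may still be given a timeout by
 * add_thread_timeout() below.
 */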

static void add_thread_timeout(struct k_thread *thread, k_timeout_t timeout)
{
	if (!K_TIMEOUT_EQ(timeout, K_FOREVER)) {
		z_add_thread_timeout(thread, timeout);
	}
}
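
/* Only K_FOREVER suppresses the timeout entirely; any other k_timeout_t
 * value (e.g. K_MSEC(100)) arms a timeout on the pending thread.
 */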

static void pend_locked(struct k_thread *thread, _wait_q_t *wait_q,
			k_timeout_t timeout)
{
#ifdef CONFIG_KERNEL_COHERENCE
	__ASSERT_NO_MSG(wait_q == NULL || arch_mem_coherent(wait_q));
#endif
	add_to_waitq_locked(thread, wait_q);
	add_thread_timeout(thread, timeout);
}

void z_pend_thread(struct k_thread *thread, _wait_q_t *wait_q,
		   k_timeout_t timeout)
{
	__ASSERT_NO_MSG(thread == _current || is_thread_dummy(thread));
	K_SPINLOCK(&sched_spinlock) {
		pend_locked(thread, wait_q, timeout);
	}
}
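
/* Per the assertion above, only the running thread itself (or a dummy
 * placeholder thread) may be pended through this path.
 */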

static inline void unpend_thread_no_timeout(struct k_thread *thread)
{
	_priq_wait_remove(&pended_on_thread(thread)->waitq, thread);
	z_mark_thread_as_not_pending(thread);
	thread->base.pended_on = NULL;
}

ALWAYS_INLINE void z_unpend_thread_no_timeout(struct k_thread *thread)
{
	K_SPINLOCK(&sched_spinlock) {
		if (thread->base.pended_on != NULL) {
			unpend_thread_no_timeout(thread);
		}
	}
}

void z_sched_wake_thread(struct k_thread *thread, bool is_timeout)
{
	K_SPINLOCK(&sched_spinlock) {
		bool killed = ((thread->base.thread_state & _THREAD_DEAD) ||
			       (thread->base.thread_state & _THREAD_ABORTING));

#ifdef CONFIG_EVENTS
		bool do_nothing = thread->no_wake_on_timeout && is_timeout;

		thread->no_wake_on_timeout = false;

		if (do_nothing) {
			continue;
		}
#endif

		if (!killed) {
			/* The thread is not being killed */
			if (thread->base.pended_on != NULL) {
				unpend_thread_no_timeout(thread);
			}
			z_mark_thread_as_started(thread);
			if (is_timeout) {
				z_mark_thread_as_not_suspended(thread);
			}
			ready_thread(thread);
		}
	}
}

#ifdef CONFIG_SYS_CLOCK_EXISTS
/* Timeout handler for *_thread_timeout() APIs */
void z_thread_timeout(struct _timeout *timeout)
{
	struct k_thread *thread = CONTAINER_OF(timeout,
					       struct k_thread, base.timeout);

	z_sched_wake_thread(thread, true);
}
#endif
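
/* When a pended thread is woken through this timeout path (is_timeout is
 * true) rather than by an explicit wake, the blocking operation it was
 * sleeping in conventionally reports a timed-out result (-EAGAIN) to its
 * caller.
 */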

int z_pend_curr_irqlock(uint32_t key, _wait_q_t *wait_q, k_timeout_t timeout)
{
	/* This is a legacy API for pre-switch architectures and isn't
	 * correctly synchronized for multi-cpu use
	 */
	__ASSERT_NO_MSG(!IS_ENABLED(CONFIG_SMP));

	pend_locked(_current, wait_q, timeout);

#if defined(CONFIG_TIMESLICING) && defined(CONFIG_SWAP_NONATOMIC)
	pending_current = _current;

	int ret = z_swap_irqlock(key);

	K_SPINLOCK(&sched_spinlock) {
		if (pending_current == _current) {
			pending_current = NULL;
		}
	}
	return ret;
#else
	return z_swap_irqlock(key);
#endif
}

int z_pend_curr(struct k_spinlock *lock, k_spinlock_key_t key,
		_wait_q_t *wait_q, k_timeout_t timeout)
{
#if defined(CONFIG_TIMESLICING) && defined(CONFIG_SWAP_NONATOMIC)
	pending_current = _current;
#endif
	__ASSERT_NO_MSG(sizeof(sched_spinlock) == 0 || lock != &sched_spinlock);

	/* We do a "lock swap" prior to calling z_swap(), such that
	 * the caller's lock gets released as desired. But we ensure
	 * that we hold the scheduler lock and leave local interrupts
	 * masked until we reach the context switch. z_swap() itself
	 * has similar code; the duplication is because it's a legacy
	 * API that doesn't expect to be called with the scheduler lock
	 * held.
	 */
	(void) k_spin_lock(&sched_spinlock);
	pend_locked(_current, wait_q, timeout);
	k_spin_release(lock);
	return z_swap(&sched_spinlock, key);
}
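
/* Sketch of a typical blocking caller (illustrative only; names such as
 * obj, resource_available and take_resource are placeholders, not APIs
 * defined in this file):
 *
 *	k_spinlock_key_t key = k_spin_lock(&obj->lock);
 *
 *	if (resource_available(obj)) {
 *		take_resource(obj);
 *		k_spin_unlock(&obj->lock, key);
 *		return 0;
 *	}
 *	return z_pend_curr(&obj->lock, key, &obj->wait_q, timeout);
 *
 * z_pend_curr() releases obj->lock on the caller's behalf and returns the
 * result of z_swap() once the thread is woken.
 */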

struct k_thread *z_unpend1_no_timeout(_wait_q_t *wait_q)
{
	struct k_thread *thread = NULL;

	K_SPINLOCK(&sched_spinlock) {
		thread = _priq_wait_best(&wait_q->waitq);

		if (thread != NULL) {
			unpend_thread_no_timeout(thread);
		}
	}

	return thread;
}
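
/* The *_no_timeout helpers deliberately leave any thread timeout armed;
 * callers that also want the timeout cancelled must abort it separately
 * (see the z_abort_thread_timeout() usage earlier in this file).
 */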
struct k_thread *z_unpend_first_thread(_wait_q_t *wait_q)
{
|
2021-02-10 01:47:47 +01:00
|
|
|
struct k_thread *thread = NULL;
|
unified: initial unified kernel implementation
Summary of what this includes:
initialization:
Copy from nano_init.c, with the following changes:
- the main thread is the continuation of the init thread, but an idle
thread is created as well
- _main() initializes threads in groups and starts the EXE group
- the ready queues are initialized
- the main thread is marked as non-essential once the system init is
done
- a weak main() symbol is provided if the application does not provide a
main() function
scheduler:
Not an exhaustive list, but basically provide primitives for:
- adding/removing a thread to/from a wait queue
- adding/removing a thread to/from the ready queue
- marking thread as ready
- locking/unlocking the scheduler
- instead of locking interrupts
- getting/setting thread priority
- checking what state (coop/preempt) a thread is currenlty running in
- rescheduling threads
- finding what thread is the next to run
- yielding/sleeping/aborting sleep
- finding the current thread
threads:
- Add operationns on threads, such as creating and starting them.
standardized handling of kernel object return codes:
- Kernel objects now cause _Swap() to return the following values:
0 => operation successful
-EAGAIN => operation timed out
-Exxxxx => operation failed for another reason
- The thread's swap_data field can be used to return any additional
information required to complete the operation, such as the actual
result of a successful operation.
timeouts:
- same as nano timeouts, renamed to simply 'timeouts'
- the kernel is still tick-based, but objects take timeout values in
ms for forward compatibility with a tickless kernel.
semaphores:
- Port of the nanokernel semaphores, which have the same basic behaviour
as the microkernel ones. Semaphore groups are not yet implemented.
- These semaphores are enhanced in that they accept an initial count and a
count limit. This allows configuring them as binary semaphores, and also
provisioning them without having to "give" the semaphore multiple times
before using them.
mutexes:
- Straight port of the microkernel mutexes. An init function is added to
allow defining them at runtime.
pipes:
- straight port
timers:
- amalgamation of nano and micro timers, with all functionalities
intact.
events:
- re-implementation, using semaphores and workqueues.
mailboxes:
- straight port
message queues:
- straight port of microkernel FIFOs
memory maps:
- straight port
workqueues:
- Basically, have all APIs follow the k_ naming rule, and use the _timeout
subsystem from the unified kernel directory, and not the _nano_timeout
one.
stacks:
- Port of the nanokernel stacks. They can now have multiple threads
pending on them and threads can wait with a timeout.
LIFOs:
- Straight port of the nanokernel LIFOs.
FIFOs:
- Straight port of the nanokernel FIFOs.
Work by: Dmitriy Korovkin <dmitriy.korovkin@windriver.com>
Peter Mitsis <peter.mitsis@windriver.com>
Allan Stephens <allan.stephens@windriver.com>
Benjamin Walsh <benjamin.walsh@windriver.com>
Change-Id: Id3cadb3694484ab2ca467889cfb029be3cd3a7d6
Signed-off-by: Benjamin Walsh <benjamin.walsh@windriver.com>
2016-09-03 00:55:39 +02:00
|
|
|
|
2023-07-07 09:12:38 +02:00
|
|
|
K_SPINLOCK(&sched_spinlock) {
|
2021-02-10 01:47:47 +01:00
|
|
|
thread = _priq_wait_best(&wait_q->waitq);
|
|
|
|
|
|
|
|
if (thread != NULL) {
|
|
|
|
unpend_thread_no_timeout(thread);
|
|
|
|
(void)z_abort_thread_timeout(thread);
|
|
|
|
}
|
unified: initial unified kernel implementation
Summary of what this includes:
initialization:
Copy from nano_init.c, with the following changes:
- the main thread is the continuation of the init thread, but an idle
thread is created as well
- _main() initializes threads in groups and starts the EXE group
- the ready queues are initialized
- the main thread is marked as non-essential once the system init is
done
- a weak main() symbol is provided if the application does not provide a
main() function
scheduler:
Not an exhaustive list, but basically provide primitives for:
- adding/removing a thread to/from a wait queue
- adding/removing a thread to/from the ready queue
- marking thread as ready
- locking/unlocking the scheduler
- instead of locking interrupts
- getting/setting thread priority
- checking what state (coop/preempt) a thread is currenlty running in
- rescheduling threads
- finding what thread is the next to run
- yielding/sleeping/aborting sleep
- finding the current thread
threads:
- Add operationns on threads, such as creating and starting them.
standardized handling of kernel object return codes:
- Kernel objects now cause _Swap() to return the following values:
0 => operation successful
-EAGAIN => operation timed out
-Exxxxx => operation failed for another reason
- The thread's swap_data field can be used to return any additional
information required to complete the operation, such as the actual
result of a successful operation.
timeouts:
- same as nano timeouts, renamed to simply 'timeouts'
- the kernel is still tick-based, but objects take timeout values in
ms for forward compatibility with a tickless kernel.
semaphores:
- Port of the nanokernel semaphores, which have the same basic behaviour
as the microkernel ones. Semaphore groups are not yet implemented.
- These semaphores are enhanced in that they accept an initial count and a
count limit. This allows configuring them as binary semaphores, and also
provisioning them without having to "give" the semaphore multiple times
before using them.
mutexes:
- Straight port of the microkernel mutexes. An init function is added to
allow defining them at runtime.
pipes:
- straight port
timers:
- amalgamation of nano and micro timers, with all functionalities
intact.
events:
- re-implementation, using semaphores and workqueues.
mailboxes:
- straight port
message queues:
- straight port of microkernel FIFOs
memory maps:
- straight port
workqueues:
- Basically, have all APIs follow the k_ naming rule, and use the _timeout
subsystem from the unified kernel directory, and not the _nano_timeout
one.
stacks:
- Port of the nanokernel stacks. They can now have multiple threads
pending on them and threads can wait with a timeout.
LIFOs:
- Straight port of the nanokernel LIFOs.
FIFOs:
- Straight port of the nanokernel FIFOs.
Work by: Dmitriy Korovkin <dmitriy.korovkin@windriver.com>
Peter Mitsis <peter.mitsis@windriver.com>
Allan Stephens <allan.stephens@windriver.com>
Benjamin Walsh <benjamin.walsh@windriver.com>
Change-Id: Id3cadb3694484ab2ca467889cfb029be3cd3a7d6
Signed-off-by: Benjamin Walsh <benjamin.walsh@windriver.com>
2016-09-03 00:55:39 +02:00
|
|
|
}
|
unified: cache the next thread to run
When adding a thread to the ready queue, it is often known at that time
if the thread added will be the next one to run or not. So, instead of
simply updating the ready queues and the bitmask, also cache what that
thread is, so that when the scheduler is invoked, it can simply fetch it
from there. This is only done if there is a thread in the cache, since
the way the cache is updated is by comparing the priorities of the
thread being added and the cached thread.
When a thread is removed from the ready queue, if it is currently the
cached thread, it is also removed from the cache. The cache is not
updated at this time, since this would be a preemptive fetching that
could be overriden before the newly cached thread would even be
scheduled in.
Finally, when a thread is scheduled in, it now becomes the cached thread
since the fact that it is running means that by definition it was the
next one to run.
Doing this can speed up considerably some context switch times,
especially when a thread is preempted by an interrupt and the same
thread is scheduled when the interrupt exits.
Change-Id: I6dc8391cfca566699bb9b217eafe6bc6a063c8bb
Signed-off-by: Benjamin Walsh <benjamin.walsh@windriver.com>
2016-09-30 19:44:58 +02:00
|
|
|
|
2019-12-19 14:19:45 +01:00
|
|
|
return thread;
|
2018-05-03 23:51:49 +02:00
|
|
|
}
|
unified: cache the next thread to run
When adding a thread to the ready queue, it is often known at that time
if the thread added will be the next one to run or not. So, instead of
simply updating the ready queues and the bitmask, also cache what that
thread is, so that when the scheduler is invoked, it can simply fetch it
from there. This is only done if there is a thread in the cache, since
the way the cache is updated is by comparing the priorities of the
thread being added and the cached thread.
When a thread is removed from the ready queue, if it is currently the
cached thread, it is also removed from the cache. The cache is not
updated at this time, since this would be a preemptive fetching that
could be overriden before the newly cached thread would even be
scheduled in.
Finally, when a thread is scheduled in, it now becomes the cached thread
since the fact that it is running means that by definition it was the
next one to run.
Doing this can speed up considerably some context switch times,
especially when a thread is preempted by an interrupt and the same
thread is scheduled when the interrupt exits.
Change-Id: I6dc8391cfca566699bb9b217eafe6bc6a063c8bb
Signed-off-by: Benjamin Walsh <benjamin.walsh@windriver.com>
2016-09-30 19:44:58 +02:00
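In pseudocode form, the caching rule described above amounts to something like
this (a sketch of the idea only, not the actual implementation; every helper
and the cached_thread variable are hypothetical):

/* Illustrative only: cache update when a thread becomes ready. */
static void ready_cache_add(struct k_thread *thread)
{
	add_to_ready_q(thread);                       /* hypothetical */

	if (cached_thread == NULL ||
	    is_higher_prio(thread, cached_thread)) {  /* hypothetical */
		cached_thread = thread;
	}
}

/* Illustrative only: cache update when a thread stops being ready. */
static void ready_cache_remove(struct k_thread *thread)
{
	remove_from_ready_q(thread);                  /* hypothetical */

	if (thread == cached_thread) {
		/* Don't pick a replacement here; the next scheduling
		 * point recomputes it.
		 */
		cached_thread = NULL;
	}
}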

void z_unpend_thread(struct k_thread *thread)
{
	z_unpend_thread_no_timeout(thread);
	(void)z_abort_thread_timeout(thread);
}

/* Priority set utility that does no rescheduling, it just changes the
 * run queue state, returning true if a reschedule is needed later.
 */
bool z_set_prio(struct k_thread *thread, int prio)
{
	bool need_sched = 0;

	K_SPINLOCK(&sched_spinlock) {
		need_sched = z_is_thread_ready(thread);

		if (need_sched) {
			/* Don't requeue on SMP if it's the running thread */
			if (!IS_ENABLED(CONFIG_SMP) || z_is_thread_queued(thread)) {
				dequeue_thread(thread);
				thread->base.prio = prio;
				queue_thread(thread);
			} else {
				thread->base.prio = prio;
			}
			update_cache(1);
		} else {
			thread->base.prio = prio;
		}
	}

	SYS_PORT_TRACING_OBJ_FUNC(k_thread, sched_priority_set, thread, prio);

	return need_sched;
}

void z_thread_priority_set(struct k_thread *thread, int prio)
{
	bool need_sched = z_set_prio(thread, prio);

	flag_ipi();

	if (need_sched && _current->base.sched_locked == 0U) {
		z_reschedule_unlocked();
	}
}
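From application code this path is reached through the public wrapper, e.g.
(standard Zephyr API usage; worker_thread is assumed to be a struct k_thread
defined elsewhere):

/* Raise a worker thread's priority at runtime; a lower numeric value
 * means a higher priority in Zephyr.
 */
k_thread_priority_set(&worker_thread, K_PRIO_PREEMPT(2));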

static inline bool resched(uint32_t key)
{
#ifdef CONFIG_SMP
	_current_cpu->swap_ok = 0;
#endif

	return arch_irq_unlocked(key) && !arch_is_in_isr();
}

/*
 * Check if the next ready thread is the same as the current thread
 * and save the trip if true.
 */
static inline bool need_swap(void)
{
	/* the SMP case will be handled in C based z_swap() */
#ifdef CONFIG_SMP
	return true;
#else
	struct k_thread *new_thread;

	/* Check if the next ready thread is the same as the current thread */
	new_thread = _kernel.ready_q.cache;

	return new_thread != _current;
#endif
}

void z_reschedule(struct k_spinlock *lock, k_spinlock_key_t key)
{
	if (resched(key.key) && need_swap()) {
		z_swap(lock, key);
	} else {
		k_spin_unlock(lock, key);
		signal_pending_ipi();
	}
}

void z_reschedule_irqlock(uint32_t key)
{
	if (resched(key)) {
		z_swap_irqlock(key);
	} else {
		irq_unlock(key);
		signal_pending_ipi();
	}
}

kernel: Scheduler refactoring: use _reschedule_*() always
There was a somewhat promiscuous pattern in the kernel where IPC
mechanisms would do something that might affect the current thread
choice, then check _must_switch_threads() (or occasionally
__must_switch_threads -- don't ask, the distinction is being replaced
by real English words), sometimes _is_in_isr() (but not always, even
in contexts where that looks like it would be a mistake), and then
call _Swap() if everything is OK, otherwise releasing the irq_lock().
Sometimes this was done directly, sometimes via the inverted test,
sometimes (poll, heh) by doing the test when the thread state was
modified and then needlessly passing the result up the call stack to
the point of the _Swap().
And some places were just calling _reschedule_threads(), which did all
this already.
Unify all this madness. The old _reschedule_threads() function has been
split into two variants: _reschedule_yield() and
_reschedule_noyield(). The latter is the "normal" one that respects
the cooperative priority of the current thread (i.e. it won't switch
out even if there is a higher priority thread ready -- the current
thread has to pend itself first), the former is used in the handful of
places where code was doing a swap unconditionally, just to preserve
precise behavior across the refactor. I'm not at all convinced it
should exist...
Signed-off-by: Andy Ross <andrew.j.ross@intel.com>
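The resulting pattern for a kernel primitive that may have made another thread
runnable is roughly as follows (an illustrative sketch; the "give" object and
its fields are hypothetical, while z_unpend_first_thread(), z_ready_thread()
and z_reschedule() are the scheduler entry points assumed here):

void my_obj_give(struct my_obj *obj)
{
	k_spinlock_key_t key = k_spin_lock(&obj->lock);
	struct k_thread *waiter = z_unpend_first_thread(&obj->wait_q);

	if (waiter != NULL) {
		z_ready_thread(waiter);     /* make the waiter runnable */
	} else {
		obj->available++;           /* hypothetical count */
	}

	/* Drops the lock and swaps only if a reschedule is actually
	 * warranted (not in an ISR, IRQs unlocked, and a higher-priority
	 * thread ready); otherwise it just unlocks.
	 */
	z_reschedule(&obj->lock, key);
}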

void k_sched_lock(void)
{
	K_SPINLOCK(&sched_spinlock) {
		SYS_PORT_TRACING_FUNC(k_thread, sched_lock);

		z_sched_lock();
	}
}

void k_sched_unlock(void)
{
	K_SPINLOCK(&sched_spinlock) {
		__ASSERT(_current->base.sched_locked != 0U, "");
		__ASSERT(!arch_is_in_isr(), "");

		++_current->base.sched_locked;
		update_cache(0);
	}

	LOG_DBG("scheduler unlocked (%p:%d)",
		_current, _current->base.sched_locked);

	SYS_PORT_TRACING_FUNC(k_thread, sched_unlock);

	z_reschedule_unlocked();
}
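Typical application usage pairs the two calls around a section that must not
be preempted by other threads (interrupts still run); a minimal example, with
shared_counter standing in for any thread-shared data:

void update_shared_state(void)
{
	k_sched_lock();      /* current thread cannot be preempted by other threads */

	shared_counter++;    /* hypothetical shared data, safe from thread preemption */

	k_sched_unlock();    /* may reschedule immediately if something became ready */
}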

struct k_thread *z_swap_next_thread(void)
{
#ifdef CONFIG_SMP
	struct k_thread *ret = next_up();

	if (ret == _current) {
		/* When not swapping, have to signal IPIs here.  In
		 * the context switch case it must happen later, after
		 * _current gets requeued.
		 */
		signal_pending_ipi();
	}
	return ret;
#else
	return _kernel.ready_q.cache;
#endif
}

#ifdef CONFIG_USE_SWITCH
/* Just a wrapper around _current = xxx with tracing */
static inline void set_current(struct k_thread *new_thread)
{
	z_thread_mark_switched_out();
	_current_cpu->current = new_thread;
}

kernel: Add cache coherence management framework
Zephyr SMP kernels need to be able to run on architectures with
incoherent caches. Naive implementation of synchronization on such
architectures requires extensive cache flushing (e.g. flush+invalidate
everything on every spin lock operation, flush on every unlock!) and
is a performance problem.
Instead, many of these systems will have access to separate "coherent"
(usually uncached) and "incoherent" regions of memory. Where this is
available, place all writable data sections by default into the
coherent region. An "__incoherent" attribute flag is defined for data
regions that are known to be CPU-local and which should use the cache.
By default, this is used for stack memory.
Stack memory will be incoherent by default, as by definition it is
local to its current thread. This requires special cache management
on context switch, so an arch API has been added for that.
Also, when enabled, add assertions to strategic places to ensure that
shared kernel data is indeed coherent. We check thread objects, the
_kernel struct, waitq's, timeouts and spinlocks. In practice almost
all kernel synchronization is built on top of these structures, and
any shared data structs will contain at least one of them.
Signed-off-by: Andy Ross <andrew.j.ross@intel.com>
Signed-off-by: Anas Nashif <anas.nashif@intel.com>

kernel/sched: Fix rare SMP deadlock
It was possible with pathological timing (see below) for the scheduler
to pick a cycle of threads on each CPU and enter the context switch
path on all of them simultaneously.
Example:
* CPU0 is idle, CPU1 is running thread A
* CPU1 makes high priority thread B runnable
* CPU1 reaches a schedule point (or returns from an interrupt) and
  decides to run thread B instead
* CPU0 simultaneously takes its IPI and returns, selecting thread A
Now both CPUs enter wait_for_switch() to spin, waiting for the context
switch code on the other thread to finish and mark the thread
runnable. So we have a deadlock, each CPU is spinning waiting for the
other!
Actually, in practice this seems not to happen on existing hardware
platforms, it's only exercisable in emulation. The reason is that the
hardware IPI time is much faster than the software paths required to
reach a schedule point or interrupt exit, so CPU1 always selects the
newly scheduled thread and no deadlock appears. I tried for a bit to
make this happen with a cycle of three threads, but it's complicated
to get right and I still couldn't get the timing to hit correctly. In
qemu, though, the IPI is implemented as a Unix signal sent to the
thread running the other CPU, which is far slower and opens the window
to see this happen.
The solution is simple enough: don't store the _current thread in the
run queue until we are on the tail end of the context switch path,
after wait_for_switch(), when we are guaranteed to reach the end.
Note that this requires changing a little logic to handle the yield
case: because we can no longer rely on _current's position in the run
queue to suppress it, we need to do the priority comparison directly
based on the existing "swap_ok" flag (which has always meant
"yielded", and maybe should be renamed).
Signed-off-by: Andy Ross <andrew.j.ross@intel.com>

kernel/sched: Add "thread_usage" API for thread runtime cycle monitoring
This is an alternate backend that does what THREAD_RUNTIME_STATS is
doing currently, but with a few advantages:
* Correctly synchronized: you can't race against a running thread
  (potentially on another CPU!) while querying its usage.
* Realtime results: you get the right answer always, up to timer
  precision, even if a thread has been running for a while
  uninterrupted and hasn't updated its total.
* Portable, no need for per-architecture code at all for the simple
  case. (It leverages the USE_SWITCH layer to do this, so won't work
  on older architectures)
* Faster/smaller: minimizes use of 64 bit math; lower overhead in
  thread struct (keeps the scratch "started" time in the CPU struct
  instead). One 64 bit counter per thread and a 32 bit scratch
  register in the CPU struct.
* Standalone. It's a core (but optional) scheduler feature, no
  dependence on para-kernel configuration like the tracing
  infrastructure.
* More precise: allows architectures to optionally call a trivial
  zero-argument/no-result cdecl function out of interrupt entry to
  avoid accounting for ISR runtime in thread totals. No configuration
  needed here, if it's called then you get proper ISR accounting, and
  if not you don't.
For right now, pending unification, it's added side-by-side with the
older API and left as a z_*() internal symbol.
Signed-off-by: Andy Ross <andrew.j.ross@intel.com>

/**
 * @brief Determine next thread to execute upon completion of an interrupt
 *
 * Thread preemption is performed by context switching after the completion
 * of a non-recursed interrupt. This function determines which thread to
 * switch to, if any. This function accepts as @p interrupted either:
 *
 * - The handle for the interrupted thread in which case the thread's context
 *   must already be fully saved and ready to be picked up by a different CPU.
 *
 * - NULL if more work is required to fully save the thread's state after
 *   it is known that a new thread is to be scheduled. It is up to the caller
 *   to store the handle resulting from the thread that is being switched out
 *   in that thread's "switch_handle" field after its
 *   context has fully been saved, following the same requirements as with
 *   the @ref arch_switch() function.
 *
 * If a new thread needs to be scheduled then its handle is returned.
 * Otherwise the same value provided as @p interrupted is returned back.
 * Those handles are the same opaque types used by the @ref arch_switch()
 * function.
 *
 * @warning
 * The @ref _current value may have changed after this call and not refer
 * to the interrupted thread anymore. It might be necessary to make a local
 * copy before calling this function.
 *
 * @param interrupted Handle for the thread that was interrupted or NULL.
 * @retval Handle for the next thread to execute, or @p interrupted when
 *         no new thread is to be scheduled.
 */
void *z_get_next_switch_handle(void *interrupted)
{
|
2019-03-30 00:25:27 +01:00
|
|
|
z_check_stack_sentinel();
|
|
|
|
|
2018-05-30 20:23:02 +02:00
|
|
|
#ifdef CONFIG_SMP
|
2021-02-05 17:15:02 +01:00
|
|
|
void *ret = NULL;
|
|
|
|
|
2023-07-07 09:12:38 +02:00
|
|
|
K_SPINLOCK(&sched_spinlock) {
|
kernel: Add cache coherence management framework
Zephyr SMP kernels need to be able to run on architectures with
incoherent caches. Naive implementation of synchronization on such
architectures requires extensive cache flushing (e.g. flush+invalidate
everything on every spin lock operation, flush on every unlock!) and
is a performance problem.
Instead, many of these systems will have access to separate "coherent"
(usually uncached) and "incoherent" regions of memory. Where this is
available, place all writable data sections by default into the
coherent region. An "__incoherent" attribute flag is defined for data
regions that are known to be CPU-local and which should use the cache.
By default, this is used for stack memory.
Stack memory will be incoherent by default, as by definition it is
local to its current thread. This requires special cache management
on context switch, so an arch API has been added for that.
Also, when enabled, add assertions to strategic places to ensure that
shared kernel data is indeed coherent. We check thread objects, the
_kernel struct, waitq's, timeouts and spinlocks. In practice almost
all kernel synchronization is built on top of these structures, and
any shared data structs will contain at least one of them.
Signed-off-by: Andy Ross <andrew.j.ross@intel.com>
Signed-off-by: Anas Nashif <anas.nashif@intel.com>
2020-05-13 17:34:04 +02:00
|
|
|
struct k_thread *old_thread = _current, *new_thread;
|
2018-05-30 20:23:02 +02:00
|
|
|
|
kernel/sched: Fix rare SMP deadlock
It was possible with pathological timing (see below) for the scheduler
to pick a cycle of threads on each CPU and enter the context switch
path on all of them simultaneously.
Example:
* CPU0 is idle, CPU1 is running thread A
* CPU1 makes high priority thread B runnable
* CPU1 reaches a schedule point (or returns from an interrupt) and
decides to run thread B instead
* CPU0 simultaneously takes its IPI and returns, selecting thread A
Now both CPUs enter wait_for_switch() to spin, waiting for the context
switch code on the other thread to finish and mark the thread
runnable. So we have a deadlock, each CPU is spinning waiting for the
other!
Actually, in practice this seems not to happen on existing hardware
platforms, it's only exercisable in emulation. The reason is that the
hardware IPI time is much faster than the software paths required to
reach a schedule point or interrupt exit, so CPU1 always selects the
newly scheduled thread and no deadlock appears. I tried for a bit to
make this happen with a cycle of three threads, but it's complicated
to get right and I still couldn't get the timing to hit correctly. In
qemu, though, the IPI is implemented as a Unix signal sent to the
thread running the other CPU, which is far slower and opens the window
to see this happen.
The solution is simple enough: don't store the _current thread in the
run queue until we are on the tail end of the context switch path,
after wait_for_switch() and going to reach the end in guaranteed time.
Note that this requires changing a little logic to handle the yield
case: because we can no longer rely on _current's position in the run
queue to suppress it, we need to do the priority comparison directly
based on the existing "swap_ok" flag (which has always meant
"yielded", and maybe should be renamed).
Signed-off-by: Andy Ross <andrew.j.ross@intel.com>
2021-02-08 17:28:54 +01:00
|
|
|
if (IS_ENABLED(CONFIG_SMP)) {
|
|
|
|
old_thread->switch_handle = NULL;
|
|
|
|
}
|
kernel: Add cache coherence management framework
Zephyr SMP kernels need to be able to run on architectures with
incoherent caches. Naive implementation of synchronization on such
architectures requires extensive cache flushing (e.g. flush+invalidate
everything on every spin lock operation, flush on every unlock!) and
is a performance problem.
Instead, many of these systems will have access to separate "coherent"
(usually uncached) and "incoherent" regions of memory. Where this is
available, place all writable data sections by default into the
coherent region. An "__incoherent" attribute flag is defined for data
regions that are known to be CPU-local and which should use the cache.
By default, this is used for stack memory.
Stack memory will be incoherent by default, as by definition it is
local to its current thread. This requires special cache management
on context switch, so an arch API has been added for that.
Also, when enabled, add assertions to strategic places to ensure that
shared kernel data is indeed coherent. We check thread objects, the
_kernel struct, waitq's, timeouts and spinlocks. In practice almost
all kernel synchronization is built on top of these structures, and
any shared data structs will contain at least one of them.
Signed-off-by: Andy Ross <andrew.j.ross@intel.com>
Signed-off-by: Anas Nashif <anas.nashif@intel.com>
2020-05-13 17:34:04 +02:00
|
|
|
new_thread = next_up();
|
|
|
|
|
kernel/sched: Add "thread_usage" API for thread runtime cycle monitoring
This is an alternate backend that does what THREAD_RUNTIME_STATS is
doing currently, but with a few advantages:
* Correctly synchronized: you can't race against a running thread
(potentially on another CPU!) while querying its usage.
* Realtime results: you get the right answer always, up to timer
precision, even if a thread has been running for a while
uninterrupted and hasn't updated its total.
* Portable, no need for per-architecture code at all for the simple
case. (It leverages the USE_SWITCH layer to do this, so won't work
on older architectures)
* Faster/smaller: minimizes use of 64 bit math; lower overhead in
thread struct (keeps the scratch "started" time in the CPU struct
instead). One 64 bit counter per thread and a 32 bit scratch
register in the CPU struct.
* Standalone. It's a core (but optional) scheduler feature, no
dependence on para-kernel configuration like the tracing
infrastructure.
* More precise: allows architectures to optionally call a trivial
zero-argument/no-result cdecl function out of interrupt entry to
avoid accounting for ISR runtime in thread totals. No configuration
needed here, if it's called then you get proper ISR accounting, and
if not you don't.
For right now, pending unification, it's added side-by-side with the
older API and left as a z_*() internal symbol.
Signed-off-by: Andy Ross <andrew.j.ross@intel.com>
2021-09-27 17:22:43 +02:00
|
|
|
z_sched_usage_switch(new_thread);
|
|
|
|
|
kernel: Add cache coherence management framework
Zephyr SMP kernels need to be able to run on architectures with
incoherent caches. Naive implementation of synchronization on such
architectures requires extensive cache flushing (e.g. flush+invalidate
everything on every spin lock operation, flush on every unlock!) and
is a performance problem.
Instead, many of these systems will have access to separate "coherent"
(usually uncached) and "incoherent" regions of memory. Where this is
available, place all writable data sections by default into the
coherent region. An "__incoherent" attribute flag is defined for data
regions that are known to be CPU-local and which should use the cache.
By default, this is used for stack memory.
Stack memory will be incoherent by default, as by definition it is
local to its current thread. This requires special cache management
on context switch, so an arch API has been added for that.
Also, when enabled, add assertions to strategic places to ensure that
shared kernel data is indeed coherent. We check thread objects, the
_kernel struct, waitq's, timeouts and spinlocks. In practice almost
all kernel synchronization is built on top of these structures, and
any shared data structs will contain at least one of them.
Signed-off-by: Andy Ross <andrew.j.ross@intel.com>
Signed-off-by: Anas Nashif <anas.nashif@intel.com>
2020-05-13 17:34:04 +02:00
|
|
|
if (old_thread != new_thread) {
|
|
|
|
update_metairq_preempt(new_thread);
|
2023-05-26 18:12:51 +02:00
|
|
|
z_sched_switch_spin(new_thread);
|
kernel: Add cache coherence management framework
Zephyr SMP kernels need to be able to run on architectures with
incoherent caches. Naive implementation of synchronization on such
architectures requires extensive cache flushing (e.g. flush+invalidate
everything on every spin lock operation, flush on every unlock!) and
is a performance problem.
Instead, many of these systems will have access to separate "coherent"
(usually uncached) and "incoherent" regions of memory. Where this is
available, place all writable data sections by default into the
coherent region. An "__incoherent" attribute flag is defined for data
regions that are known to be CPU-local and which should use the cache.
By default, this is used for stack memory.
Stack memory will be incoherent by default, as by definition it is
local to its current thread. This requires special cache management
on context switch, so an arch API has been added for that.
Also, when enabled, add assertions to strategic places to ensure that
shared kernel data is indeed coherent. We check thread objects, the
_kernel struct, waitq's, timeouts and spinlocks. In practice almost
all kernel synchronization is built on top of these structures, and
any shared data structs will contain at least one of them.
Signed-off-by: Andy Ross <andrew.j.ross@intel.com>
Signed-off-by: Anas Nashif <anas.nashif@intel.com>
2020-05-13 17:34:04 +02:00
|
|
|
arch_cohere_stacks(old_thread, interrupted, new_thread);
|
2019-11-13 18:41:52 +01:00
|
|
|
|
2018-05-30 20:23:02 +02:00
|
|
|
_current_cpu->swap_ok = 0;
|
2020-05-13 17:34:04 +02:00
|
|
|
set_current(new_thread);
|
|
|
|
|
2021-12-01 03:26:26 +01:00
|
|
|
#ifdef CONFIG_TIMESLICING
|
|
|
|
z_reset_time_slice(new_thread);
|
|
|
|
#endif
|
|
|
|
|
2019-12-13 11:24:56 +01:00
|
|
|
#ifdef CONFIG_SPIN_VALIDATE
|
2019-02-20 19:07:31 +01:00
|
|
|
/* Changed _current! Update the spinlock
|
2021-04-30 15:58:20 +02:00
|
|
|
* bookkeeping so the validation doesn't get
|
2019-02-20 19:07:31 +01:00
|
|
|
* confused when the "wrong" thread tries to
|
|
|
|
* release the lock.
|
|
|
|
*/
|
|
|
|
z_spin_lock_set_owner(&sched_spinlock);
|
|
|
|
#endif
|
kernel/sched: Fix rare SMP deadlock
It was possible with pathological timing (see below) for the scheduler
to pick a cycle of threads on each CPU and enter the context switch
path on all of them simultaneously.
Example:
* CPU0 is idle, CPU1 is running thread A
* CPU1 makes high priority thread B runnable
* CPU1 reaches a schedule point (or returns from an interrupt) and
decides to run thread B instead
* CPU0 simultaneously takes its IPI and returns, selecting thread A
Now both CPUs enter wait_for_switch() to spin, waiting for the context
switch code on the other thread to finish and mark the thread
runnable. So we have a deadlock, each CPU is spinning waiting for the
other!
Actually, in practice this seems not to happen on existing hardware
platforms; it's only exercisable in emulation. The reason is that the
hardware IPI time is much faster than the software paths required to
reach a schedule point or interrupt exit, so CPU1 always selects the
newly scheduled thread and no deadlock appears. I tried for a bit to
make this happen with a cycle of three threads, but it's complicated
to get right and I still couldn't get the timing to hit correctly. In
qemu, though, the IPI is implemented as a Unix signal sent to the
thread running the other CPU, which is far slower and opens the window
to see this happen.
The solution is simple enough: don't store the _current thread in the
run queue until we are on the tail end of the context switch path,
after wait_for_switch(), once we are guaranteed to reach the end.
Note that this requires changing a little logic to handle the yield
case: because we can no longer rely on _current's position in the run
queue to suppress it, we need to do the priority comparison directly
based on the existing "swap_ok" flag (which has always meant
"yielded", and maybe should be renamed).
Signed-off-by: Andy Ross <andrew.j.ross@intel.com>
2021-02-08 17:28:54 +01:00
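A standalone sketch of the ordering this commit establishes follows. Every name in it is a toy stand-in rather than a kernel symbol; the point is only that a still-runnable outgoing thread is put back on the run queue after the spin on the incoming thread's switch handle, so neither CPU can wait on a thread the other has not yet re-queued.

/* Illustrative toy model, not Zephyr internals: spin first, re-queue second. */
#include <stdbool.h>
#include <stddef.h>

struct toy_thread {
	void *switch_handle;      /* non-NULL once the thread may be resumed */
	bool queued;              /* still runnable? */
	struct toy_thread *next;  /* toy run queue link */
};

static struct toy_thread *toy_runq;

static void toy_runq_add(struct toy_thread *t)
{
	t->next = toy_runq;
	toy_runq = t;
}

static void *toy_switch_tail(struct toy_thread *old, struct toy_thread *incoming,
			     void *interrupted)
{
	/* 1. Wait for the incoming thread to finish leaving its previous CPU. */
	while (incoming->switch_handle == NULL) {
		/* spin; a real kernel also needs memory barriers here */
	}

	/* 2. Only now publish the outgoing thread back to the run queue. */
	if (old->queued) {
		toy_runq_add(old);
	}

	/* 3. Hand over the switch handles, mirroring the real code below. */
	old->switch_handle = interrupted;
	void *ret = incoming->switch_handle;
	incoming->switch_handle = NULL;

	return ret;
}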
|
|
|
|
|
|
|
/* A queued (runnable) old/current thread
|
|
|
|
* needs to be added back to the run queue
|
|
|
|
* here, and atomically with its switch handle
|
|
|
|
* being set below. This is safe now, as we
|
|
|
|
* will not return into it.
|
|
|
|
*/
|
|
|
|
if (z_is_thread_queued(old_thread)) {
|
2021-09-24 03:44:40 +02:00
|
|
|
runq_add(old_thread);
|
2021-02-08 17:28:54 +01:00
|
|
|
}
|
2018-05-30 20:23:02 +02:00
|
|
|
}
|
2020-05-13 17:34:04 +02:00
|
|
|
old_thread->switch_handle = interrupted;
|
2021-02-05 17:15:02 +01:00
|
|
|
ret = new_thread->switch_handle;
|
2021-02-08 17:28:54 +01:00
|
|
|
if (IS_ENABLED(CONFIG_SMP)) {
|
|
|
|
/* Active threads MUST have a null here */
|
|
|
|
new_thread->switch_handle = NULL;
|
|
|
|
}
|
2018-05-03 23:51:49 +02:00
|
|
|
}
|
2022-04-06 19:10:17 +02:00
|
|
|
signal_pending_ipi();
|
2021-02-05 17:15:02 +01:00
|
|
|
return ret;
|
2018-05-30 20:23:02 +02:00
|
|
|
#else
|
kernel/sched: Add "thread_usage" API for thread runtime cycle monitoring
This is an alternate backend that does what THREAD_RUNTIME_STATS is
doing currently, but with a few advantages:
* Correctly synchronized: you can't race against a running thread
(potentially on another CPU!) while querying its usage.
* Realtime results: you get the right answer always, up to timer
precision, even if a thread has been running for a while
uninterrupted and hasn't updated its total.
* Portable, no need for per-architecture code at all for the simple
case. (It leverages the USE_SWITCH layer to do this, so won't work
on older architectures)
* Faster/smaller: minimizes use of 64 bit math; lower overhead in
thread struct (keeps the scratch "started" time in the CPU struct
instead). One 64 bit counter per thread and a 32 bit scratch
register in the CPU struct.
* Standalone. It's a core (but optional) scheduler feature, no
dependence on para-kernel configuration like the tracing
infrastructure.
* More precise: allows architectures to optionally call a trivial
zero-argument/no-result cdecl function out of interrupt entry to
avoid accounting for ISR runtime in thread totals. No configuration
needed here, if it's called then you get proper ISR accounting, and
if not you don't.
For right now, pending unification, it's added side-by-side with the
older API and left as a z_*() internal symbol.
Signed-off-by: Andy Ross <andrew.j.ross@intel.com>
2021-09-27 17:22:43 +02:00
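A self-contained sketch of the accounting scheme outlined in this message follows; the type and function names are hypothetical, chosen only to show the 64-bit per-thread total plus the 32-bit per-CPU scratch timestamp updated at switch time.

/* Illustrative only: charge cycles since the last switch to the outgoing
 * thread, then restart the per-CPU scratch timestamp.
 */
#include <stdint.h>

struct usage_cpu    { uint32_t usage_started; };  /* 32-bit scratch */
struct usage_thread { uint64_t usage_cycles;  };  /* 64-bit total   */

static uint32_t usage_cycle_get(void)
{
	return 0U; /* stand-in for the platform cycle counter */
}

static void usage_switch(struct usage_cpu *cpu, struct usage_thread *outgoing)
{
	uint32_t now = usage_cycle_get();

	outgoing->usage_cycles += now - cpu->usage_started;
	cpu->usage_started = now;
}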
|
|
|
z_sched_usage_switch(_kernel.ready_q.cache);
|
2020-05-13 17:34:04 +02:00
|
|
|
_current->switch_handle = interrupted;
|
2021-02-18 19:15:23 +01:00
|
|
|
set_current(_kernel.ready_q.cache);
|
2018-05-03 23:51:49 +02:00
|
|
|
return _current->switch_handle;
|
2021-02-05 17:15:02 +01:00
|
|
|
#endif
|
unified: initial unified kernel implementation
Summary of what this includes:
initialization:
Copy from nano_init.c, with the following changes:
- the main thread is the continuation of the init thread, but an idle
thread is created as well
- _main() initializes threads in groups and starts the EXE group
- the ready queues are initialized
- the main thread is marked as non-essential once the system init is
done
- a weak main() symbol is provided if the application does not provide a
main() function
scheduler:
Not an exhaustive list, but basically provide primitives for:
- adding/removing a thread to/from a wait queue
- adding/removing a thread to/from the ready queue
- marking thread as ready
- locking/unlocking the scheduler
- instead of locking interrupts
- getting/setting thread priority
- checking what state (coop/preempt) a thread is currently running in
- rescheduling threads
- finding what thread is the next to run
- yielding/sleeping/aborting sleep
- finding the current thread
threads:
- Add operations on threads, such as creating and starting them.
standardized handling of kernel object return codes:
- Kernel objects now cause _Swap() to return the following values:
0 => operation successful
-EAGAIN => operation timed out
-Exxxxx => operation failed for another reason
- The thread's swap_data field can be used to return any additional
information required to complete the operation, such as the actual
result of a successful operation.
timeouts:
- same as nano timeouts, renamed to simply 'timeouts'
- the kernel is still tick-based, but objects take timeout values in
ms for forward compatibility with a tickless kernel.
semaphores:
- Port of the nanokernel semaphores, which have the same basic behaviour
as the microkernel ones. Semaphore groups are not yet implemented.
- These semaphores are enhanced in that they accept an initial count and a
count limit. This allows configuring them as binary semaphores, and also
provisioning them without having to "give" the semaphore multiple times
before using them.
mutexes:
- Straight port of the microkernel mutexes. An init function is added to
allow defining them at runtime.
pipes:
- straight port
timers:
- amalgamation of nano and micro timers, with all functionalities
intact.
events:
- re-implementation, using semaphores and workqueues.
mailboxes:
- straight port
message queues:
- straight port of microkernel FIFOs
memory maps:
- straight port
workqueues:
- Basically, have all APIs follow the k_ naming rule, and use the _timeout
subsystem from the unified kernel directory, and not the _nano_timeout
one.
stacks:
- Port of the nanokernel stacks. They can now have multiple threads
pending on them and threads can wait with a timeout.
LIFOs:
- Straight port of the nanokernel LIFOs.
FIFOs:
- Straight port of the nanokernel FIFOs.
Work by: Dmitriy Korovkin <dmitriy.korovkin@windriver.com>
Peter Mitsis <peter.mitsis@windriver.com>
Allan Stephens <allan.stephens@windriver.com>
Benjamin Walsh <benjamin.walsh@windriver.com>
Change-Id: Id3cadb3694484ab2ca467889cfb029be3cd3a7d6
Signed-off-by: Benjamin Walsh <benjamin.walsh@windriver.com>
2016-09-03 00:55:39 +02:00
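For the return-code convention listed above, a hedged sketch of how a caller interprets the result of a blocking wait is given here; do_blocking_wait() is a hypothetical stub for this example, not a kernel symbol.

/* Illustrative only: the 0 / -EAGAIN / -Exxxxx convention from the caller's
 * point of view.
 */
#include <errno.h>

static int do_blocking_wait(void *obj, int timeout_ms) /* hypothetical stub */
{
	(void)obj;
	(void)timeout_ms;
	return -EAGAIN; /* pretend the wait timed out */
}

static int example_wait(void *obj)
{
	int ret = do_blocking_wait(obj, 100);

	if (ret == 0) {
		/* operation successful */
	} else if (ret == -EAGAIN) {
		/* operation timed out */
	} else {
		/* operation failed for another reason (-Exxxxx) */
	}
	return ret;
}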
|
|
|
}
|
kernel: optimize ms-to-ticks for certain tick frequencies
Some tick frequencies lend themselves to optimized conversions from ms
to ticks and vice-versa.
- 1000Hz which does not need any conversion
- 500Hz, 250Hz, 125Hz where the division/multiplication are a straight
shift since they are power-of-two factors of 1000.
In addition, some more generally used values are made to use optimized
conversion equations rather than the generic one that uses 64-bit math,
and often results in calling compiler intrinsics.
These values are: 100Hz, 50Hz, 25Hz, 20Hz, 10Hz, 1Hz (the last one used
in some testing).
Avoiding the 64-bit math intrinsics has the additional benefit, beyond
increased performance, of using a significantly lower amount
of stack space: 52 bytes on ARM Cortex-M and 80 bytes on x86.
Change-Id: I080eb338a2637d6b1c6838c119af1a9fa37fe869
Signed-off-by: Benjamin Walsh <benjamin.walsh@windriver.com>
2016-12-20 20:39:08 +01:00
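A small, self-contained illustration of the shift-based conversions described above follows; the macro name and chosen rate are assumptions for the sketch, not the kernel's configuration symbols, and rounding is left as plain truncation.

/* Illustrative only: ms -> ticks without 64-bit math when the tick rate is a
 * power-of-two factor of 1000.
 */
#include <stdint.h>

#define EXAMPLE_TICKS_PER_SEC 250 /* assumed rate for this sketch */

static inline uint32_t example_ms_to_ticks(uint32_t ms)
{
#if EXAMPLE_TICKS_PER_SEC == 1000
	return ms;          /* 1 tick per ms: no conversion needed */
#elif EXAMPLE_TICKS_PER_SEC == 500
	return ms >> 1;     /* 2 ms per tick */
#elif EXAMPLE_TICKS_PER_SEC == 250
	return ms >> 2;     /* 4 ms per tick */
#elif EXAMPLE_TICKS_PER_SEC == 125
	return ms >> 3;     /* 8 ms per tick */
#else
	return (uint32_t)(((uint64_t)ms * EXAMPLE_TICKS_PER_SEC) / 1000U);
#endif
}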
|
|
|
#endif
|
2016-09-03 00:55:39 +02:00
|
|
|
|
2019-03-08 22:19:05 +01:00
|
|
|
void z_priq_dumb_remove(sys_dlist_t *pq, struct k_thread *thread)
|
2018-05-03 23:51:49 +02:00
|
|
|
{
|
2023-08-21 15:30:26 +02:00
|
|
|
ARG_UNUSED(pq);
|
|
|
|
|
2019-09-22 03:36:23 +02:00
|
|
|
__ASSERT_NO_MSG(!z_is_idle_thread_object(thread));
|
2018-05-03 23:51:49 +02:00
|
|
|
|
|
|
|
sys_dlist_remove(&thread->base.qnode_dlist);
|
|
|
|
}
|
|
|
|
|
2019-03-08 22:19:05 +01:00
|
|
|
struct k_thread *z_priq_dumb_best(sys_dlist_t *pq)
|
2018-05-03 23:51:49 +02:00
|
|
|
{
|
2019-12-19 14:19:45 +01:00
|
|
|
struct k_thread *thread = NULL;
|
2018-11-16 00:03:32 +01:00
|
|
|
sys_dnode_t *n = sys_dlist_peek_head(pq);
|
|
|
|
|
2019-01-04 06:36:28 +01:00
|
|
|
if (n != NULL) {
|
2019-12-19 14:19:45 +01:00
|
|
|
thread = CONTAINER_OF(n, struct k_thread, base.qnode_dlist);
|
2019-01-04 06:36:28 +01:00
|
|
|
}
|
2019-12-19 14:19:45 +01:00
|
|
|
return thread;
|
2018-05-03 23:51:49 +02:00
|
|
|
}
|
|
|
|
|
2019-03-08 22:19:05 +01:00
|
|
|
bool z_priq_rb_lessthan(struct rbnode *a, struct rbnode *b)
|
2018-05-03 23:51:49 +02:00
|
|
|
{
|
2019-12-19 14:19:45 +01:00
|
|
|
struct k_thread *thread_a, *thread_b;
|
2021-03-01 18:19:57 +01:00
|
|
|
int32_t cmp;
|
2018-05-03 23:51:49 +02:00
|
|
|
|
2019-12-19 14:19:45 +01:00
|
|
|
thread_a = CONTAINER_OF(a, struct k_thread, base.qnode_rb);
|
|
|
|
thread_b = CONTAINER_OF(b, struct k_thread, base.qnode_rb);
|
2018-05-03 23:51:49 +02:00
|
|
|
|
2021-03-01 18:19:57 +01:00
|
|
|
cmp = z_sched_prio_cmp(thread_a, thread_b);
|
|
|
|
|
|
|
|
if (cmp > 0) {
|
2018-09-21 00:43:57 +02:00
|
|
|
return true;
|
2021-03-01 18:19:57 +01:00
|
|
|
} else if (cmp < 0) {
|
2018-09-21 00:43:57 +02:00
|
|
|
return false;
|
2018-05-03 23:51:49 +02:00
|
|
|
} else {
|
2019-12-19 14:19:45 +01:00
|
|
|
return thread_a->base.order_key < thread_b->base.order_key
|
|
|
|
? 1 : 0;
|
2018-05-03 23:51:49 +02:00
|
|
|
}
|
|
|
|
}
|
|
|
|
|
2019-03-08 22:19:05 +01:00
|
|
|
void z_priq_rb_add(struct _priq_rb *pq, struct k_thread *thread)
|
2018-05-03 23:51:49 +02:00
|
|
|
{
|
|
|
|
struct k_thread *t;
|
|
|
|
|
2019-09-22 03:36:23 +02:00
|
|
|
__ASSERT_NO_MSG(!z_is_idle_thread_object(thread));
|
2018-05-03 23:51:49 +02:00
|
|
|
|
|
|
|
thread->base.order_key = pq->next_order_key++;
|
|
|
|
|
|
|
|
/* Renumber at wraparound. This is tiny code, and in practice
|
|
|
|
* will almost never be hit on real systems. BUT on very
|
|
|
|
* long-running systems where a priq never completely empties
|
|
|
|
* AND that contains very large numbers of threads, it can be
|
|
|
|
* a latency glitch to loop over all the threads like this.
|
|
|
|
*/
|
|
|
|
if (!pq->next_order_key) {
|
|
|
|
RB_FOR_EACH_CONTAINER(&pq->tree, t, base.qnode_rb) {
|
|
|
|
t->base.order_key = pq->next_order_key++;
|
2016-12-24 01:34:41 +01:00
|
|
|
}
|
|
|
|
}
|
2016-09-03 00:55:39 +02:00
|
|
|
|
2018-05-03 23:51:49 +02:00
|
|
|
rb_insert(&pq->tree, &thread->base.qnode_rb);
|
|
|
|
}
|
2016-09-03 00:55:39 +02:00
|
|
|
|
2019-03-08 22:19:05 +01:00
|
|
|
void z_priq_rb_remove(struct _priq_rb *pq, struct k_thread *thread)
|
2018-05-03 23:51:49 +02:00
|
|
|
{
|
2019-09-22 03:36:23 +02:00
|
|
|
__ASSERT_NO_MSG(!z_is_idle_thread_object(thread));
|
2016-09-03 00:55:39 +02:00
|
|
|
|
2018-05-03 23:51:49 +02:00
|
|
|
rb_remove(&pq->tree, &thread->base.qnode_rb);
|
2016-11-24 04:15:44 +01:00
|
|
|
|
2018-05-03 23:51:49 +02:00
|
|
|
if (!pq->tree.root) {
|
|
|
|
pq->next_order_key = 0;
|
2016-09-03 00:55:39 +02:00
|
|
|
}
|
|
|
|
}
|
|
|
|
|
2019-03-08 22:19:05 +01:00
|
|
|
struct k_thread *z_priq_rb_best(struct _priq_rb *pq)
|
2018-04-03 03:24:58 +02:00
|
|
|
{
|
2019-12-19 14:19:45 +01:00
|
|
|
struct k_thread *thread = NULL;
|
2018-05-03 23:51:49 +02:00
|
|
|
struct rbnode *n = rb_get_min(&pq->tree);
|
2018-04-03 03:24:58 +02:00
|
|
|
|
2019-01-04 06:36:28 +01:00
|
|
|
if (n != NULL) {
|
2019-12-19 14:19:45 +01:00
|
|
|
thread = CONTAINER_OF(n, struct k_thread, base.qnode_rb);
|
2019-01-04 06:36:28 +01:00
|
|
|
}
|
2019-12-19 14:19:45 +01:00
|
|
|
return thread;
|
2018-04-03 03:24:58 +02:00
|
|
|
}
|
|
|
|
|
2018-06-28 19:38:14 +02:00
|
|
|
#ifdef CONFIG_SCHED_MULTIQ
|
|
|
|
# if (K_LOWEST_THREAD_PRIO - K_HIGHEST_THREAD_PRIO) > 31
|
|
|
|
# error Too many priorities for multiqueue scheduler (max 32)
|
|
|
|
# endif
|
|
|
|
|
2021-11-29 15:52:11 +01:00
|
|
|
static ALWAYS_INLINE void z_priq_mq_add(struct _priq_mq *pq,
|
|
|
|
struct k_thread *thread)
|
2018-06-28 19:38:14 +02:00
|
|
|
{
|
|
|
|
int priority_bit = thread->base.prio - K_HIGHEST_THREAD_PRIO;
|
|
|
|
|
|
|
|
sys_dlist_append(&pq->queues[priority_bit], &thread->base.qnode_dlist);
|
2019-02-26 19:14:04 +01:00
|
|
|
pq->bitmask |= BIT(priority_bit);
|
2018-06-28 19:38:14 +02:00
|
|
|
}
|
|
|
|
|
2021-11-29 15:52:11 +01:00
|
|
|
static ALWAYS_INLINE void z_priq_mq_remove(struct _priq_mq *pq,
|
|
|
|
struct k_thread *thread)
|
2018-06-28 19:38:14 +02:00
|
|
|
{
|
|
|
|
int priority_bit = thread->base.prio - K_HIGHEST_THREAD_PRIO;
|
|
|
|
|
|
|
|
sys_dlist_remove(&thread->base.qnode_dlist);
|
|
|
|
if (sys_dlist_is_empty(&pq->queues[priority_bit])) {
|
2019-02-26 19:14:04 +01:00
|
|
|
pq->bitmask &= ~BIT(priority_bit);
|
2018-06-28 19:38:14 +02:00
|
|
|
}
|
|
|
|
}
|
2021-12-21 00:24:30 +01:00
|
|
|
#endif
|
2018-06-28 19:38:14 +02:00
|
|
|
|
2019-03-08 22:19:05 +01:00
|
|
|
struct k_thread *z_priq_mq_best(struct _priq_mq *pq)
|
2018-06-28 19:38:14 +02:00
|
|
|
{
|
|
|
|
if (!pq->bitmask) {
|
|
|
|
return NULL;
|
|
|
|
}
|
|
|
|
|
2019-12-19 14:19:45 +01:00
|
|
|
struct k_thread *thread = NULL;
|
2018-06-28 19:38:14 +02:00
|
|
|
sys_dlist_t *l = &pq->queues[__builtin_ctz(pq->bitmask)];
|
2018-11-16 00:03:32 +01:00
|
|
|
sys_dnode_t *n = sys_dlist_peek_head(l);
|
2018-06-28 19:38:14 +02:00
|
|
|
|
2019-01-04 06:36:28 +01:00
|
|
|
if (n != NULL) {
|
2019-12-19 14:19:45 +01:00
|
|
|
thread = CONTAINER_OF(n, struct k_thread, base.qnode_dlist);
|
2019-01-04 06:36:28 +01:00
|
|
|
}
|
2019-12-19 14:19:45 +01:00
|
|
|
return thread;
|
2018-06-28 19:38:14 +02:00
|
|
|
}
|
|
|
|
|
2019-03-08 22:19:05 +01:00
|
|
|
int z_unpend_all(_wait_q_t *wait_q)
|
2018-05-10 18:45:42 +02:00
|
|
|
{
|
2018-05-10 20:10:34 +02:00
|
|
|
int need_sched = 0;
|
2019-12-19 14:19:45 +01:00
|
|
|
struct k_thread *thread;
|
2018-05-10 18:45:42 +02:00
|
|
|
|
2019-12-19 14:19:45 +01:00
|
|
|
while ((thread = z_waitq_head(wait_q)) != NULL) {
|
|
|
|
z_unpend_thread(thread);
|
|
|
|
z_ready_thread(thread);
|
2018-05-10 18:45:42 +02:00
|
|
|
need_sched = 1;
|
|
|
|
}
|
2018-05-10 20:10:34 +02:00
|
|
|
|
|
|
|
return need_sched;
|
2018-05-10 18:45:42 +02:00
|
|
|
}
|
|
|
|
|
2021-09-24 22:49:14 +02:00
|
|
|
void init_ready_q(struct _ready_q *rq)
|
2016-09-03 00:55:39 +02:00
|
|
|
{
|
2021-09-24 22:49:14 +02:00
|
|
|
#if defined(CONFIG_SCHED_SCALABLE)
|
|
|
|
rq->runq = (struct _priq_rb) {
|
2018-05-03 23:51:49 +02:00
|
|
|
.tree = {
|
2019-03-08 22:19:05 +01:00
|
|
|
.lessthan_fn = z_priq_rb_lessthan,
|
2018-05-03 23:51:49 +02:00
|
|
|
}
|
|
|
|
};
|
2021-09-24 22:49:14 +02:00
|
|
|
#elif defined(CONFIG_SCHED_MULTIQ)
|
2018-06-28 19:38:14 +02:00
|
|
|
for (int i = 0; i < ARRAY_SIZE(_kernel.ready_q.runq.queues); i++) {
|
2021-09-24 22:49:14 +02:00
|
|
|
sys_dlist_init(&rq->runq.queues[i]);
|
2018-06-28 19:38:14 +02:00
|
|
|
}
|
2021-09-24 22:49:14 +02:00
|
|
|
#else
|
|
|
|
sys_dlist_init(&rq->runq);
|
2018-06-28 19:38:14 +02:00
|
|
|
#endif
|
2021-09-24 22:49:14 +02:00
|
|
|
}
|
|
|
|
|
|
|
|
void z_sched_init(void)
|
|
|
|
{
|
2021-09-24 19:57:39 +02:00
|
|
|
#ifdef CONFIG_SCHED_CPU_MASK_PIN_ONLY
|
2023-03-16 22:54:25 +01:00
|
|
|
for (int i = 0; i < CONFIG_MP_MAX_NUM_CPUS; i++) {
|
2021-09-24 19:57:39 +02:00
|
|
|
init_ready_q(&_kernel.cpus[i].ready_q);
|
|
|
|
}
|
|
|
|
#else
|
2021-09-24 22:49:14 +02:00
|
|
|
init_ready_q(&_kernel.ready_q);
|
2021-09-24 19:57:39 +02:00
|
|
|
#endif
|
2016-09-03 00:55:39 +02:00
|
|
|
}
|
|
|
|
|
2019-03-08 22:19:05 +01:00
|
|
|
int z_impl_k_thread_priority_get(k_tid_t thread)
|
2016-10-07 20:41:34 +02:00
|
|
|
{
|
2016-11-08 16:36:50 +01:00
|
|
|
return thread->base.prio;
|
2016-10-07 20:41:34 +02:00
|
|
|
}
|
|
|
|
|
2017-09-27 23:45:10 +02:00
|
|
|
#ifdef CONFIG_USERSPACE
|
userspace: Support for split 64 bit arguments
System call arguments, at the arch layer, are single words. So
passing wider values requires splitting them into two registers at
call time. This gets even more complicated for values (e.g
k_timeout_t) that may have different sizes depending on configuration.
This patch adds a feature to gen_syscalls.py to detect functions with
wide arguments and automatically generates code to split/unsplit them.
Unfortunately the current scheme of Z_SYSCALL_DECLARE_* macros won't
work with functions like this, because for N arguments (our current
maximum N is 10) there are 2^N possible configurations of argument
widths. So this generates the complete functions for each handler and
wrapper, effectively doing in python what was originally done in the
preprocessor.
Another complexity is that traditionally the z_hdlr_*() function for a
system call has taken the raw list of word arguments, which does not
work when some of those arguments must be 64 bit types. So instead of
using a single Z_SYSCALL_HANDLER macro, this splits the job of
z_hdlr_*() into two steps: An automatically-generated unmarshalling
function, z_mrsh_*(), which then calls a user-supplied verification
function z_vrfy_*(). The verification function is typesafe, and is a
simple C function with exactly the same argument and return signature
as the syscall impl function. It is also not responsible for
validating the pointers to the extra parameter array or a wide return
value; that code gets automatically generated.
This commit includes new vrfy/msrh handling for all syscalls invoked
during CI runs. Future commits will port the less testable code.
Signed-off-by: Andy Ross <andrew.j.ross@intel.com>
2019-08-06 22:34:31 +02:00
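The word splitting itself is simple; a self-contained sketch with hypothetical helper names is shown below (the real split/join code is emitted by gen_syscalls.py rather than written by hand).

/* Illustrative only: pass a 64-bit argument as two 32-bit machine words at
 * the call site and reassemble it in the unmarshalling stub.
 */
#include <stdint.h>

static inline void example_split_u64(uint64_t v, uint32_t *lo, uint32_t *hi)
{
	*lo = (uint32_t)(v & 0xffffffffU);
	*hi = (uint32_t)(v >> 32);
}

static inline uint64_t example_join_u64(uint32_t lo, uint32_t hi)
{
	return ((uint64_t)hi << 32) | lo;
}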
|
|
|
static inline int z_vrfy_k_thread_priority_get(k_tid_t thread)
|
|
|
|
{
|
|
|
|
Z_OOPS(Z_SYSCALL_OBJ(thread, K_OBJ_THREAD));
|
|
|
|
return z_impl_k_thread_priority_get(thread);
|
|
|
|
}
|
|
|
|
#include <syscalls/k_thread_priority_get_mrsh.c>
|
2017-09-27 23:45:10 +02:00
|
|
|
#endif
|
|
|
|
|
2021-03-29 16:54:23 +02:00
|
|
|
void z_impl_k_thread_priority_set(k_tid_t thread, int prio)
|
2016-09-03 00:55:39 +02:00
|
|
|
{
|
2016-11-08 21:44:05 +01:00
|
|
|
/*
|
|
|
|
* Use NULL, since we cannot know what the entry point is (we do not
|
|
|
|
* keep track of it) and idle cannot change its priority.
|
|
|
|
*/
|
2019-03-08 22:19:05 +01:00
|
|
|
Z_ASSERT_VALID_PRIO(prio, NULL);
|
2019-11-07 21:43:29 +01:00
|
|
|
__ASSERT(!arch_is_in_isr(), "");
|
2016-09-03 00:55:39 +02:00
|
|
|
|
2021-03-29 16:54:23 +02:00
|
|
|
struct k_thread *th = (struct k_thread *)thread;
|
2016-09-03 00:55:39 +02:00
|
|
|
|
2021-03-29 16:54:23 +02:00
|
|
|
z_thread_priority_set(th, prio);
|
unified: initial unified kernel implementation
Summary of what this includes:
initialization:
Copy from nano_init.c, with the following changes:
- the main thread is the continuation of the init thread, but an idle
thread is created as well
- _main() initializes threads in groups and starts the EXE group
- the ready queues are initialized
- the main thread is marked as non-essential once the system init is
done
- a weak main() symbol is provided if the application does not provide a
main() function
scheduler:
Not an exhaustive list, but basically provide primitives for:
- adding/removing a thread to/from a wait queue
- adding/removing a thread to/from the ready queue
- marking thread as ready
- locking/unlocking the scheduler
- instead of locking interrupts
- getting/setting thread priority
- checking what state (coop/preempt) a thread is currenlty running in
- rescheduling threads
- finding what thread is the next to run
- yielding/sleeping/aborting sleep
- finding the current thread
threads:
- Add operationns on threads, such as creating and starting them.
standardized handling of kernel object return codes:
- Kernel objects now cause _Swap() to return the following values:
0 => operation successful
-EAGAIN => operation timed out
-Exxxxx => operation failed for another reason
- The thread's swap_data field can be used to return any additional
information required to complete the operation, such as the actual
result of a successful operation.
timeouts:
- same as nano timeouts, renamed to simply 'timeouts'
- the kernel is still tick-based, but objects take timeout values in
ms for forward compatibility with a tickless kernel.
semaphores:
- Port of the nanokernel semaphores, which have the same basic behaviour
as the microkernel ones. Semaphore groups are not yet implemented.
- These semaphores are enhanced in that they accept an initial count and a
count limit. This allows configuring them as binary semaphores, and also
provisioning them without having to "give" the semaphore multiple times
before using them.
mutexes:
- Straight port of the microkernel mutexes. An init function is added to
allow defining them at runtime.
pipes:
- straight port
timers:
- amalgamation of nano and micro timers, with all functionalities
intact.
events:
- re-implementation, using semaphores and workqueues.
mailboxes:
- straight port
message queues:
- straight port of microkernel FIFOs
memory maps:
- straight port
workqueues:
- Basically, have all APIs follow the k_ naming rule, and use the _timeout
subsystem from the unified kernel directory, and not the _nano_timeout
one.
stacks:
- Port of the nanokernel stacks. They can now have multiple threads
pending on them and threads can wait with a timeout.
LIFOs:
- Straight port of the nanokernel LIFOs.
FIFOs:
- Straight port of the nanokernel FIFOs.
Work by: Dmitriy Korovkin <dmitriy.korovkin@windriver.com>
Peter Mitsis <peter.mitsis@windriver.com>
Allan Stephens <allan.stephens@windriver.com>
Benjamin Walsh <benjamin.walsh@windriver.com>
Change-Id: Id3cadb3694484ab2ca467889cfb029be3cd3a7d6
Signed-off-by: Benjamin Walsh <benjamin.walsh@windriver.com>
2016-09-03 00:55:39 +02:00
|
|
|
}
|
|
|
|
|
2017-09-29 23:00:48 +02:00
|
|
|
#ifdef CONFIG_USERSPACE

userspace: Support for split 64 bit arguments
System call arguments, at the arch layer, are single words. So
passing wider values requires splitting them into two registers at
call time. This gets even more complicated for values (e.g.
k_timeout_t) that may have different sizes depending on configuration.
This patch adds a feature to gen_syscalls.py to detect functions with
wide arguments and automatically generates code to split/unsplit them.
Unfortunately the current scheme of Z_SYSCALL_DECLARE_* macros won't
work with functions like this, because for N arguments (our current
maximum N is 10) there are 2^N possible configurations of argument
widths. So this generates the complete functions for each handler and
wrapper, effectively doing in Python what was originally done in the
preprocessor.
Another complexity is that traditionally the z_hdlr_*() function for a
system call has taken the raw list of word arguments, which does not
work when some of those arguments must be 64 bit types. So instead of
using a single Z_SYSCALL_HANDLER macro, this splits the job of
z_hdlr_*() into two steps: an automatically generated unmarshalling
function, z_mrsh_*(), which then calls a user-supplied verification
function z_vrfy_*(). The verification function is typesafe, and is a
simple C function with exactly the same argument and return signature
as the syscall impl function. It is also not responsible for
validating the pointers to the extra parameter array or a wide return
value; that code gets automatically generated.
This commit includes new vrfy/mrsh handling for all syscalls invoked
during CI runs. Future commits will port the less testable code.
Signed-off-by: Andy Ross <andrew.j.ross@intel.com>
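/*
 * Illustrative sketch only (assumed names, not the generated source): the
 * k_thread_priority_set_mrsh.c file included further below is produced by
 * gen_syscalls.py and, conceptually, pairs a word-unmarshalling wrapper
 * with the hand-written verifier that follows:
 *
 *	static uintptr_t z_mrsh_k_thread_priority_set(uintptr_t arg0,
 *						      uintptr_t arg1, ...)
 *	{
 *		z_vrfy_k_thread_priority_set((k_tid_t)arg0, (int)arg1);
 *		return 0;
 *	}
 *
 * The real wrapper signature and marshalling details come from the
 * generator; only the verifier is written by hand.
 */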
static inline void z_vrfy_k_thread_priority_set(k_tid_t thread, int prio)
{
	Z_OOPS(Z_SYSCALL_OBJ(thread, K_OBJ_THREAD));
	Z_OOPS(K_SYSCALL_VERIFY_MSG(_is_valid_prio(prio, NULL),
				    "invalid thread priority %d", prio));
	Z_OOPS(K_SYSCALL_VERIFY_MSG((int8_t)prio >= thread->base.prio,
				    "thread priority may only be downgraded (%d < %d)",
				    prio, thread->base.prio));

	z_impl_k_thread_priority_set(thread, prio);
}
#include <syscalls/k_thread_priority_set_mrsh.c>
#endif
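
/*
 * Example usage (hypothetical application code, not part of this file):
 * a thread may lower its own priority, but the checks above reject any
 * attempt by a user thread to raise it:
 *
 *	k_thread_priority_set(k_current_get(),
 *			      K_LOWEST_APPLICATION_THREAD_PRIO);
 */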

#ifdef CONFIG_SCHED_DEADLINE
void z_impl_k_thread_deadline_set(k_tid_t tid, int deadline)
{
	struct k_thread *thread = tid;

	K_SPINLOCK(&sched_spinlock) {
		thread->base.prio_deadline = k_cycle_get_32() + deadline;
		if (z_is_thread_queued(thread)) {
			dequeue_thread(thread);
			queue_thread(thread);
		}
	}
}

#ifdef CONFIG_USERSPACE
static inline void z_vrfy_k_thread_deadline_set(k_tid_t tid, int deadline)
{
	struct k_thread *thread = tid;

	Z_OOPS(Z_SYSCALL_OBJ(thread, K_OBJ_THREAD));
	Z_OOPS(K_SYSCALL_VERIFY_MSG(deadline > 0,
				    "invalid thread deadline %d",
				    (int)deadline));

	z_impl_k_thread_deadline_set((k_tid_t)thread, deadline);
}
#include <syscalls/k_thread_deadline_set_mrsh.c>
#endif
#endif
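
/*
 * Example usage (hypothetical, requires CONFIG_SCHED_DEADLINE): under the
 * earliest-deadline-first policy the deadline is expressed in hardware
 * cycles relative to now, so a thread typically re-arms it before each
 * unit of work:
 *
 *	k_thread_deadline_set(k_current_get(), k_ms_to_cyc_ceil32(5));
 */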

bool k_can_yield(void)
{
	return !(k_is_pre_kernel() || k_is_in_isr() ||
		 z_is_idle_thread_object(_current));
}
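
/*
 * Example usage (hypothetical): code that may run before the kernel is up,
 * from an ISR, or in the idle thread can guard its yield:
 *
 *	if (k_can_yield()) {
 *		k_yield();
 *	}
 */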

void z_impl_k_yield(void)
{
	__ASSERT(!arch_is_in_isr(), "");

	SYS_PORT_TRACING_FUNC(k_thread, yield);

	k_spinlock_key_t key = k_spin_lock(&sched_spinlock);

	if (!IS_ENABLED(CONFIG_SMP) ||
	    z_is_thread_queued(_current)) {
		dequeue_thread(_current);
	}
	queue_thread(_current);
	update_cache(1);
	z_swap(&sched_spinlock, key);
}
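
/*
 * Example usage (hypothetical): a cooperative thread polling a shared flag
 * can yield so that other ready threads of the same priority get a chance
 * to run between checks:
 *
 *	while (!atomic_get(&done)) {
 *		k_yield();
 *	}
 */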

#ifdef CONFIG_USERSPACE
static inline void z_vrfy_k_yield(void)
{
	z_impl_k_yield();
}
#include <syscalls/k_yield_mrsh.c>
#endif

static int32_t z_tick_sleep(k_ticks_t ticks)
{
#ifdef CONFIG_MULTITHREADING
	uint32_t expected_wakeup_ticks;

	__ASSERT(!arch_is_in_isr(), "");

	LOG_DBG("thread %p for %lu ticks", _current, (unsigned long)ticks);

	/* wait of 0 ms is treated as a 'yield' */
	if (ticks == 0) {
		k_yield();
		return 0;
	}

	k_timeout_t timeout = Z_TIMEOUT_TICKS(ticks);

	if (Z_TICK_ABS(ticks) <= 0) {
		expected_wakeup_ticks = ticks + sys_clock_tick_get_32();
	} else {
		expected_wakeup_ticks = Z_TICK_ABS(ticks);
	}
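	/*
	 * At this point expected_wakeup_ticks holds an absolute tick count:
	 * either the decoded Z_TICK_ABS() deadline supplied by the caller, or
	 * the current tick plus the relative 'ticks' argument. It is used
	 * after the swap to work out how early the thread was woken.
	 */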

	k_spinlock_key_t key = k_spin_lock(&sched_spinlock);

#if defined(CONFIG_TIMESLICING) && defined(CONFIG_SWAP_NONATOMIC)
	pending_current = _current;
#endif
	unready_thread(_current);

kernel/timeout: Make timeout arguments an opaque type
Add a k_timeout_t type, and use it everywhere that kernel API
functions were accepting a millisecond timeout argument. Instead of
forcing milliseconds everywhere (which are often not integrally
representable as system ticks), do the conversion to ticks at the
point where the timeout is created. This avoids an extra unit
conversion in some application code, and allows us to express the
timeout in units other than milliseconds to achieve greater precision.
The existing K_MSEC() et al. macros now return initializers for a
k_timeout_t.
The K_NO_WAIT and K_FOREVER constants have now become k_timeout_t
values, which means they cannot be operated on as integers.
Applications which have their own APIs that need to inspect these
vs. user-provided timeouts can now use a K_TIMEOUT_EQ() predicate to
test for equality.
Timer drivers, which receive an integer tick count in their
z_clock_set_timeout() functions, now use the integer-valued
K_TICKS_FOREVER constant instead of K_FOREVER.
For the initial release, to preserve source compatibility, a
CONFIG_LEGACY_TIMEOUT_API kconfig is provided. When true, the
k_timeout_t will remain a compatible 32 bit value that will work with
any legacy Zephyr application.
Some subsystems present timeout (or timeout-like) values to their own
users as APIs that re-use the kernel's own constants and conventions.
These will require some minor design work to adapt to the new scheme
(in most cases just using k_timeout_t directly in their own API), and
they have not been changed in this patch, instead selecting
CONFIG_LEGACY_TIMEOUT_API via kconfig. These subsystems include:
CAN Bus, the Microbit display driver, I2S, LoRa modem drivers, the
UART Async API, Video hardware drivers, the console subsystem, and
the network buffer abstraction.
k_sleep() now takes a k_timeout_t argument, with a k_msleep() variant
provided that works identically to the original API.
Most of the changes here are just type/configuration management and
documentation, but there are logic changes in mempool, where a loop
that used a timeout numerically has been reworked using a new
z_timeout_end_calc() predicate. Also in queue.c, when POLL was
enabled, a similar loop was needlessly used to retry the k_poll()
call after a spurious failure. But k_poll() does not fail spuriously,
so the loop was removed.
Signed-off-by: Andy Ross <andrew.j.ross@intel.com>
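
	/*
	 * Illustrative sketch (hypothetical caller code, not part of this
	 * file): with the opaque k_timeout_t described above, timeouts are
	 * built with the K_* macros and compared with K_TIMEOUT_EQ() rather
	 * than treated as integers:
	 *
	 *	k_timeout_t t = K_MSEC(100);
	 *
	 *	if (!K_TIMEOUT_EQ(t, K_FOREVER)) {
	 *		k_sleep(t);
	 *	}
	 */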
	z_add_thread_timeout(_current, timeout);
	z_mark_thread_as_suspended(_current);

	(void)z_swap(&sched_spinlock, key);

	__ASSERT(!z_is_thread_state_set(_current, _THREAD_SUSPENDED), "");

	ticks = (k_ticks_t)expected_wakeup_ticks - sys_clock_tick_get_32();
	if (ticks > 0) {
		return ticks;
	}
#endif

	return 0;
}
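
/*
 * Example usage (hypothetical, within this file): the return value is the
 * number of ticks still remaining when the sleep ends early, e.g. because
 * another thread called k_wakeup() on the sleeper:
 *
 *	int32_t left = z_tick_sleep(k_ms_to_ticks_ceil32(100));
 */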

int32_t z_impl_k_sleep(k_timeout_t timeout)
{
	k_ticks_t ticks;

	__ASSERT(!arch_is_in_isr(), "");

	SYS_PORT_TRACING_FUNC_ENTER(k_thread, sleep, timeout);

	/* in case of K_FOREVER, we suspend */
|
kernel/timeout: Make timeout arguments an opaque type
Add a k_timeout_t type, and use it everywhere that kernel API
functions were accepting a millisecond timeout argument. Instead of
forcing milliseconds everywhere (which are often not integrally
representable as system ticks), do the conversion to ticks at the
point where the timeout is created. This avoids an extra unit
conversion in some application code, and allows us to express the
timeout in units other than milliseconds to achieve greater precision.
The existing K_MSEC() et. al. macros now return initializers for a
k_timeout_t.
The K_NO_WAIT and K_FOREVER constants have now become k_timeout_t
values, which means they cannot be operated on as integers.
Applications which have their own APIs that need to inspect these
vs. user-provided timeouts can now use a K_TIMEOUT_EQ() predicate to
test for equality.
Timer drivers, which receive an integer tick count in ther
z_clock_set_timeout() functions, now use the integer-valued
K_TICKS_FOREVER constant instead of K_FOREVER.
For the initial release, to preserve source compatibility, a
CONFIG_LEGACY_TIMEOUT_API kconfig is provided. When true, the
k_timeout_t will remain a compatible 32 bit value that will work with
any legacy Zephyr application.
Some subsystems present timeout (or timeout-like) values to their own
users as APIs that would re-use the kernel's own constants and
conventions. These will require some minor design work to adapt to
the new scheme (in most cases just using k_timeout_t directly in their
own API), and they have not been changed in this patch, instead
selecting CONFIG_LEGACY_TIMEOUT_API via kconfig. These subsystems
include: CAN Bus, the Microbit display driver, I2S, LoRa modem
drivers, the UART Async API, Video hardware drivers, the console
subsystem, and the network buffer abstraction.
k_sleep() now takes a k_timeout_t argument, with a k_msleep() variant
provided that works identically to the original API.
Most of the changes here are just type/configuration management and
documentation, but there are logic changes in mempool, where a loop
that used a timeout numerically has been reworked using a new
z_timeout_end_calc() predicate. Also in queue.c, a (when POLL was
enabled) a similar loop was needlessly used to try to retry the
k_poll() call after a spurious failure. But k_poll() does not fail
spuriously, so the loop was removed.
Signed-off-by: Andy Ross <andrew.j.ross@intel.com>
2020-03-06 00:18:14 +01:00
|
|
|
if (K_TIMEOUT_EQ(timeout, K_FOREVER)) {
|
2019-11-08 19:44:22 +01:00
|
|
|
k_thread_suspend(_current);
|
2021-03-26 10:59:08 +01:00
|
|
|
|
|
|
|
SYS_PORT_TRACING_FUNC_EXIT(k_thread, sleep, timeout, (int32_t) K_TICKS_FOREVER);
|
|
|
|
|
2020-05-27 18:26:57 +02:00
|
|
|
return (int32_t) K_TICKS_FOREVER;
|
2019-11-08 19:44:22 +01:00
|
|
|
}
|
|
|
|
|
kernel/timeout: Make timeout arguments an opaque type
Add a k_timeout_t type, and use it everywhere that kernel API
functions were accepting a millisecond timeout argument. Instead of
forcing milliseconds everywhere (which are often not integrally
representable as system ticks), do the conversion to ticks at the
point where the timeout is created. This avoids an extra unit
conversion in some application code, and allows us to express the
timeout in units other than milliseconds to achieve greater precision.
The existing K_MSEC() et. al. macros now return initializers for a
k_timeout_t.
The K_NO_WAIT and K_FOREVER constants have now become k_timeout_t
values, which means they cannot be operated on as integers.
Applications which have their own APIs that need to inspect these
vs. user-provided timeouts can now use a K_TIMEOUT_EQ() predicate to
test for equality.
Timer drivers, which receive an integer tick count in ther
z_clock_set_timeout() functions, now use the integer-valued
K_TICKS_FOREVER constant instead of K_FOREVER.
For the initial release, to preserve source compatibility, a
CONFIG_LEGACY_TIMEOUT_API kconfig is provided. When true, the
k_timeout_t will remain a compatible 32 bit value that will work with
any legacy Zephyr application.
Some subsystems present timeout (or timeout-like) values to their own
users as APIs that would re-use the kernel's own constants and
conventions. These will require some minor design work to adapt to
the new scheme (in most cases just using k_timeout_t directly in their
own API), and they have not been changed in this patch, instead
selecting CONFIG_LEGACY_TIMEOUT_API via kconfig. These subsystems
include: CAN Bus, the Microbit display driver, I2S, LoRa modem
drivers, the UART Async API, Video hardware drivers, the console
subsystem, and the network buffer abstraction.
k_sleep() now takes a k_timeout_t argument, with a k_msleep() variant
provided that works identically to the original API.
Most of the changes here are just type/configuration management and
documentation, but there are logic changes in mempool, where a loop
that used a timeout numerically has been reworked using a new
z_timeout_end_calc() predicate. Also in queue.c, a (when POLL was
enabled) a similar loop was needlessly used to try to retry the
k_poll() call after a spurious failure. But k_poll() does not fail
spuriously, so the loop was removed.
Signed-off-by: Andy Ross <andrew.j.ross@intel.com>
2020-03-06 00:18:14 +01:00
|
|
|
ticks = timeout.ticks;
|
|
|
|
|
2019-05-08 22:22:46 +02:00
|
|
|
ticks = z_tick_sleep(ticks);
|
2021-03-26 10:59:08 +01:00
|
|
|
|
|
|
|
int32_t ret = k_ticks_to_ms_floor64(ticks);
|
|
|
|
|
|
|
|
SYS_PORT_TRACING_FUNC_EXIT(k_thread, sleep, timeout, ret);
|
|
|
|
|
|
|
|
return ret;
|
2019-05-08 22:22:46 +02:00
|
|
|
}
|
|
|
|
|
#ifdef CONFIG_USERSPACE
static inline int32_t z_vrfy_k_sleep(k_timeout_t timeout)
{
	return z_impl_k_sleep(timeout);
}
#include <syscalls/k_sleep_mrsh.c>
#endif

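/*
 * Illustrative sketch (not part of the kernel sources): how application code
 * typically exercises the k_sleep() path implemented above. The helper name
 * example_sleep_demo() is hypothetical.
 */
static void example_sleep_demo(void)
{
	/* Sleep for roughly 100 ms; the return value is the time left (in ms)
	 * if the thread was woken early, e.g. by k_wakeup(), otherwise 0.
	 */
	int32_t remaining = k_sleep(K_MSEC(100));

	if (remaining > 0) {
		/* Woken before the full 100 ms elapsed */
	}

	/* K_FOREVER suspends the caller until another thread wakes it; in
	 * that case k_sleep() reports K_TICKS_FOREVER.
	 */
	(void)k_sleep(K_FOREVER);
}
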
int32_t z_impl_k_usleep(int us)
{
	int32_t ticks;

	SYS_PORT_TRACING_FUNC_ENTER(k_thread, usleep, us);

	ticks = k_us_to_ticks_ceil64(us);
	ticks = z_tick_sleep(ticks);

	SYS_PORT_TRACING_FUNC_EXIT(k_thread, usleep, us, k_ticks_to_us_floor64(ticks));

	return k_ticks_to_us_floor64(ticks);
}

#ifdef CONFIG_USERSPACE
static inline int32_t z_vrfy_k_usleep(int us)
{
	return z_impl_k_usleep(us);
}
#include <syscalls/k_usleep_mrsh.c>
#endif

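/*
 * Illustrative sketch (not part of the kernel sources): k_usleep() converts
 * the request to ticks with a ceiling and reports the remaining time with a
 * floor, so very short requests round up to at least one tick. The helper
 * name is hypothetical.
 */
static void example_usleep_demo(void)
{
	/* Sleep for at least 50 microseconds (rounded up to whole ticks);
	 * a positive return value is the time left if woken early.
	 */
	int32_t left_us = k_usleep(50);

	ARG_UNUSED(left_us);
}
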
void z_impl_k_wakeup(k_tid_t thread)
{
	SYS_PORT_TRACING_OBJ_FUNC(k_thread, wakeup, thread);

	if (z_is_thread_pending(thread)) {
		return;
	}

	if (z_abort_thread_timeout(thread) < 0) {
		/* Might have just been sleeping forever */
		if (thread->base.thread_state != _THREAD_SUSPENDED) {
			return;
		}
	}

	z_mark_thread_as_not_suspended(thread);
	z_ready_thread(thread);

	flag_ipi();

	if (!arch_is_in_isr()) {
		z_reschedule_unlocked();
	}
}

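/*
 * Illustrative sketch (not part of the kernel sources): pairing
 * k_sleep(K_FOREVER) with k_wakeup(), which the implementation above supports
 * by also resuming threads that suspended themselves for an unbounded sleep.
 * The thread identifier and helper names are hypothetical.
 */
static k_tid_t example_sleeper_tid;

static void example_sleeper(void)
{
	/* Block indefinitely until some other context calls k_wakeup() */
	(void)k_sleep(K_FOREVER);
}

static void example_waker(void)
{
	/* Make the sleeper runnable again; this is legal from an ISR, in
	 * which case the reschedule is left to the interrupt exit path.
	 */
	k_wakeup(example_sleeper_tid);
}
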
#ifdef CONFIG_TRACE_SCHED_IPI
extern void z_trace_sched_ipi(void);
#endif

#ifdef CONFIG_SMP
void z_sched_ipi(void)
{
	/* NOTE: When adding code to this, make sure this is called
	 * at appropriate location when !CONFIG_SCHED_IPI_SUPPORTED.
	 */
#ifdef CONFIG_TRACE_SCHED_IPI
	z_trace_sched_ipi();
#endif

#ifdef CONFIG_TIMESLICING
	if (sliceable(_current)) {
		z_time_slice();
	}
#endif
}
#endif

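/*
 * Illustrative sketch (not part of the kernel sources): architecture code is
 * expected to invoke z_sched_ipi() from its scheduler-IPI interrupt handler so
 * that time slicing keeps working on secondary CPUs. The handler name and its
 * argument are hypothetical.
 */
static void example_arch_sched_ipi_isr(const void *arg)
{
	ARG_UNUSED(arg);

	z_sched_ipi();
}
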
#ifdef CONFIG_USERSPACE
static inline void z_vrfy_k_wakeup(k_tid_t thread)
{
	Z_OOPS(Z_SYSCALL_OBJ(thread, K_OBJ_THREAD));
	z_impl_k_wakeup(thread);
}
#include <syscalls/k_wakeup_mrsh.c>
#endif

k_tid_t z_impl_k_sched_current_thread_query(void)
{
#ifdef CONFIG_SMP
	/* In SMP, _current is a field read from _current_cpu, which
	 * can race with preemption before it is read.  We must lock
	 * local interrupts when reading it.
	 */
	unsigned int k = arch_irq_lock();
#endif

	k_tid_t ret = _current_cpu->current;

#ifdef CONFIG_SMP
	arch_irq_unlock(k);
#endif
	return ret;
}

#ifdef CONFIG_USERSPACE
static inline k_tid_t z_vrfy_k_sched_current_thread_query(void)
{
	return z_impl_k_sched_current_thread_query();
}
#include <syscalls/k_sched_current_thread_query_mrsh.c>
#endif

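/*
 * Illustrative sketch (not part of the kernel sources), assuming the public
 * k_current_get() API ultimately resolves to the query above when the current
 * thread pointer cannot be read directly by the caller. The helper name is
 * hypothetical.
 */
static void example_identify_self(void)
{
	k_tid_t self = k_current_get();

	/* The ID can be fed back into other thread APIs, e.g. to read the
	 * caller's own priority.
	 */
	int prio = k_thread_priority_get(self);

	ARG_UNUSED(prio);
}
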
int z_impl_k_is_preempt_thread(void)
{
	return !arch_is_in_isr() && is_preempt(_current);
}

#ifdef CONFIG_USERSPACE
static inline int z_vrfy_k_is_preempt_thread(void)
{
	return z_impl_k_is_preempt_thread();
}
#include <syscalls/k_is_preempt_thread_mrsh.c>
#endif

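/*
 * Illustrative sketch (not part of the kernel sources): k_is_preempt_thread()
 * returns non-zero only when called from a preemptible thread, never from an
 * ISR or a cooperative thread. The helper name is hypothetical.
 */
static void example_check_preemptible(void)
{
	if (k_is_preempt_thread()) {
		/* The caller can be preempted: long-running work here may be
		 * interrupted by higher-priority threads.
		 */
	} else {
		/* Cooperative context (or ISR): the caller keeps the CPU
		 * until it yields or blocks.
		 */
	}
}
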
#ifdef CONFIG_SCHED_CPU_MASK
# ifdef CONFIG_SMP
/* Right now we use two bytes for this mask */
BUILD_ASSERT(CONFIG_MP_MAX_NUM_CPUS <= 16, "Too many CPUs for mask word");
# endif


static int cpu_mask_mod(k_tid_t thread, uint32_t enable_mask, uint32_t disable_mask)
{
	int ret = 0;

#ifdef CONFIG_SCHED_CPU_MASK_PIN_ONLY
	__ASSERT(z_is_thread_prevented_from_running(thread),
		 "Running threads cannot change CPU pin");
#endif

	K_SPINLOCK(&sched_spinlock) {
		if (z_is_thread_prevented_from_running(thread)) {
			thread->base.cpu_mask |= enable_mask;
			thread->base.cpu_mask &= ~disable_mask;
		} else {
			ret = -EINVAL;
		}
	}

#if defined(CONFIG_ASSERT) && defined(CONFIG_SCHED_CPU_MASK_PIN_ONLY)
	int m = thread->base.cpu_mask;

	__ASSERT((m == 0) || ((m & (m - 1)) == 0),
		 "Only one CPU allowed in mask when PIN_ONLY");
#endif

	return ret;
}

int k_thread_cpu_mask_clear(k_tid_t thread)
{
	return cpu_mask_mod(thread, 0, 0xffffffff);
}

int k_thread_cpu_mask_enable_all(k_tid_t thread)
{
	return cpu_mask_mod(thread, 0xffffffff, 0);
}

int k_thread_cpu_mask_enable(k_tid_t thread, int cpu)
{
	return cpu_mask_mod(thread, BIT(cpu), 0);
}

int k_thread_cpu_mask_disable(k_tid_t thread, int cpu)
{
	return cpu_mask_mod(thread, 0, BIT(cpu));
}

int k_thread_cpu_pin(k_tid_t thread, int cpu)
{
	int ret;

	ret = k_thread_cpu_mask_clear(thread);
	if (ret == 0) {
		return k_thread_cpu_mask_enable(thread, cpu);
	}
	return ret;
}

#endif /* CONFIG_SCHED_CPU_MASK */

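/*
 * Illustrative sketch (not part of the kernel sources): restricting a thread
 * to a single CPU with the mask API above. With CONFIG_SCHED_CPU_MASK_PIN_ONLY
 * the mask may only be changed while the thread is not runnable, so the pin is
 * applied before k_thread_start(). Stack size, priority, and all example names
 * are hypothetical.
 */
K_THREAD_STACK_DEFINE(example_pinned_stack, 1024);
static struct k_thread example_pinned_thread;

static void example_pinned_entry(void *a, void *b, void *c)
{
	ARG_UNUSED(a);
	ARG_UNUSED(b);
	ARG_UNUSED(c);
}

static void example_pin_to_cpu0(void)
{
	k_tid_t tid = k_thread_create(&example_pinned_thread,
				      example_pinned_stack,
				      K_THREAD_STACK_SIZEOF(example_pinned_stack),
				      example_pinned_entry, NULL, NULL, NULL,
				      K_PRIO_PREEMPT(5), 0, K_FOREVER);

	/* The thread was created with a K_FOREVER delay, so it is not yet
	 * runnable and its CPU mask may still be modified.
	 */
	(void)k_thread_cpu_pin(tid, 0);
	k_thread_start(tid);
}
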
static inline void unpend_all(_wait_q_t *wait_q)
{
	struct k_thread *thread;

	while ((thread = z_waitq_head(wait_q)) != NULL) {
		unpend_thread_no_timeout(thread);
		(void)z_abort_thread_timeout(thread);
		arch_thread_return_value_set(thread, 0);
		ready_thread(thread);
	}
}

#ifdef CONFIG_CMSIS_RTOS_V1
extern void z_thread_cmsis_status_mask_clear(struct k_thread *thread);
#endif

2021-02-20 00:32:19 +01:00
|
|
|
static void end_thread(struct k_thread *thread)
|
|
|
|
{
|
|
|
|
/* We hold the lock, and the thread is known not to be running
|
|
|
|
* anywhere.
|
|
|
|
*/
|
2021-03-29 16:03:49 +02:00
|
|
|
if ((thread->base.thread_state & _THREAD_DEAD) == 0U) {
|
2021-02-20 00:32:19 +01:00
|
|
|
thread->base.thread_state |= _THREAD_DEAD;
|
|
|
|
thread->base.thread_state &= ~_THREAD_ABORTING;
|
|
|
|
if (z_is_thread_queued(thread)) {
|
2021-09-24 01:41:30 +02:00
|
|
|
dequeue_thread(thread);
|
2021-02-20 00:32:19 +01:00
|
|
|
}
|
|
|
|
if (thread->base.pended_on != NULL) {
|
|
|
|
unpend_thread_no_timeout(thread);
|
|
|
|
}
|
|
|
|
(void)z_abort_thread_timeout(thread);
|
|
|
|
unpend_all(&thread->join_queue);
|
|
|
|
update_cache(1);
|
|
|
|
|
2023-08-13 23:41:52 +02:00
|
|
|
#if defined(CONFIG_FPU) && defined(CONFIG_FPU_SHARING)
|
|
|
|
arch_float_disable(thread);
|
|
|
|
#endif
|
|
|
|
|
2021-03-26 10:59:08 +01:00
|
|
|
SYS_PORT_TRACING_FUNC(k_thread, sched_abort, thread);
|
|
|
|
|
2021-02-20 00:32:19 +01:00
|
|
|
z_thread_monitor_exit(thread);
|
|
|
|
|
2021-09-06 07:59:40 +02:00
|
|
|
#ifdef CONFIG_CMSIS_RTOS_V1
|
|
|
|
z_thread_cmsis_status_mask_clear(thread);
|
|
|
|
#endif
|
|
|
|
|

#ifdef CONFIG_OBJ_CORE_THREAD
#ifdef CONFIG_OBJ_CORE_STATS_THREAD
		k_obj_core_stats_deregister(K_OBJ_CORE(thread));
#endif
		k_obj_core_unlink(K_OBJ_CORE(thread));
#endif

#ifdef CONFIG_USERSPACE
		z_mem_domain_exit_thread(thread);
		z_thread_perms_all_clear(thread);
		k_object_uninit(thread->stack_obj);
		k_object_uninit(thread);
#endif
	}
}

void z_thread_abort(struct k_thread *thread)
{
	k_spinlock_key_t key = k_spin_lock(&sched_spinlock);

	if ((thread->base.user_options & K_ESSENTIAL) != 0) {
		k_spin_unlock(&sched_spinlock, key);
		__ASSERT(false, "aborting essential thread %p", thread);
		k_panic();
		return;
	}

	if ((thread->base.thread_state & _THREAD_DEAD) != 0U) {
		k_spin_unlock(&sched_spinlock, key);
		return;
	}

#ifdef CONFIG_SMP
	if (is_aborting(thread) && thread == _current && arch_is_in_isr()) {
		/* Another CPU is spinning for us, don't deadlock */
		end_thread(thread);
	}

	bool active = thread_active_elsewhere(thread);

	if (active) {
		/* It's running somewhere else, flag and poke */
		thread->base.thread_state |= _THREAD_ABORTING;

		/* We're going to spin, so need a true synchronous IPI
		 * here, not deferred!
		 */
#ifdef CONFIG_SCHED_IPI_SUPPORTED
		arch_sched_ipi();
#endif
	}

	if (is_aborting(thread) && thread != _current) {
		if (arch_is_in_isr()) {
			/* ISRs can only spin waiting another CPU */
			k_spin_unlock(&sched_spinlock, key);
			while (is_aborting(thread)) {
			}

			/* Now we know it's dying, but not necessarily
			 * dead. Wait for the switch to happen!
			 */
			key = k_spin_lock(&sched_spinlock);
			z_sched_switch_spin(thread);
			k_spin_unlock(&sched_spinlock, key);
		} else if (active) {
			/* Threads can join */
			add_to_waitq_locked(_current, &thread->join_queue);
			z_swap(&sched_spinlock, key);
		}
		return; /* lock has been released */
	}
#endif
	end_thread(thread);
	if (thread == _current && !arch_is_in_isr()) {
		z_swap(&sched_spinlock, key);
		__ASSERT(false, "aborted _current back from dead");
	}
	k_spin_unlock(&sched_spinlock, key);
}

#if !defined(CONFIG_ARCH_HAS_THREAD_ABORT)
void z_impl_k_thread_abort(struct k_thread *thread)
{
	SYS_PORT_TRACING_OBJ_FUNC_ENTER(k_thread, abort, thread);

	z_thread_abort(thread);

	SYS_PORT_TRACING_OBJ_FUNC_EXIT(k_thread, abort, thread);
}
#endif
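
/* Illustrative sketch, not part of this file: aborting a worker thread from
 * application code. The worker thread object is hypothetical. When
 * k_thread_abort() returns, the target is no longer running: z_thread_abort()
 * above either tears it down directly or spins/pends on the join queue until
 * the other CPU has switched away from it.
 *
 *   extern struct k_thread worker;
 *
 *   void stop_worker(void)
 *   {
 *           k_thread_abort(&worker);
 *           // the worker has stopped running at this point
 *   }
 */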

int z_impl_k_thread_join(struct k_thread *thread, k_timeout_t timeout)
{
	k_spinlock_key_t key = k_spin_lock(&sched_spinlock);
	int ret = 0;

	SYS_PORT_TRACING_OBJ_FUNC_ENTER(k_thread, join, thread, timeout);

	if ((thread->base.thread_state & _THREAD_DEAD) != 0U) {
		z_sched_switch_spin(thread);
		ret = 0;
	} else if (K_TIMEOUT_EQ(timeout, K_NO_WAIT)) {
		ret = -EBUSY;
	} else if ((thread == _current) ||
		   (thread->base.pended_on == &_current->join_queue)) {
		ret = -EDEADLK;
	} else {
		__ASSERT(!arch_is_in_isr(), "cannot join in ISR");
		add_to_waitq_locked(_current, &thread->join_queue);
		add_thread_timeout(_current, timeout);

		SYS_PORT_TRACING_OBJ_FUNC_BLOCKING(k_thread, join, thread, timeout);
		ret = z_swap(&sched_spinlock, key);
		SYS_PORT_TRACING_OBJ_FUNC_EXIT(k_thread, join, thread, timeout, ret);

		return ret;
	}

	SYS_PORT_TRACING_OBJ_FUNC_EXIT(k_thread, join, thread, timeout, ret);

	k_spin_unlock(&sched_spinlock, key);
	return ret;
}
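
/* Illustrative sketch, not part of this file: joining a thread with a bounded
 * wait from application code. The worker thread object is hypothetical. Per
 * the logic above, 0 means the thread has exited, -EBUSY means K_NO_WAIT was
 * given while it was still running, and -EAGAIN means the wait timed out.
 *
 *   extern struct k_thread worker;
 *
 *   int wait_for_worker(void)
 *   {
 *           int rc = k_thread_join(&worker, K_MSEC(100));
 *
 *           if (rc == -EAGAIN) {
 *                   // still running after 100 ms
 *           }
 *           return rc;
 *   }
 */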

#ifdef CONFIG_USERSPACE
/* Special case: don't oops if the thread is uninitialized. This is because
 * the initialization bit does double-duty for thread objects; if false, it
 * means either the thread object is truly uninitialized, or the thread ran
 * and exited for some reason.
 *
 * Return true in this case, indicating we should just do nothing and return
 * success to the caller.
 */
static bool thread_obj_validate(struct k_thread *thread)
{
	struct k_object *ko = z_object_find(thread);
	int ret = z_object_validate(ko, K_OBJ_THREAD, _OBJ_INIT_TRUE);

	switch (ret) {
	case 0:
		return false;
	case -EINVAL:
		return true;
	default:
#ifdef CONFIG_LOG
		z_dump_object_error(ret, thread, ko, K_OBJ_THREAD);
#endif
		Z_OOPS(K_SYSCALL_VERIFY_MSG(ret, "access denied"));
	}
	CODE_UNREACHABLE; /* LCOV_EXCL_LINE */
}

static inline int z_vrfy_k_thread_join(struct k_thread *thread,
				       k_timeout_t timeout)
{
	if (thread_obj_validate(thread)) {
		return 0;
	}

	return z_impl_k_thread_join(thread, timeout);
}
#include <syscalls/k_thread_join_mrsh.c>

static inline void z_vrfy_k_thread_abort(k_tid_t thread)
{
	if (thread_obj_validate(thread)) {
		return;
	}

	Z_OOPS(K_SYSCALL_VERIFY_MSG(!(thread->base.user_options & K_ESSENTIAL),
				    "aborting essential thread %p", thread));

	z_impl_k_thread_abort((struct k_thread *)thread);
}
#include <syscalls/k_thread_abort_mrsh.c>
#endif /* CONFIG_USERSPACE */

/*
 * future scheduler.h API implementations
 */
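
/* Wake the highest-priority thread pending on @wait_q, if there is one: hand
 * it @swap_retval and @swap_data as its z_swap() result, cancel its timeout,
 * and make it ready. Returns true if a thread was woken, false if the wait
 * queue was empty.
 */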
bool z_sched_wake(_wait_q_t *wait_q, int swap_retval, void *swap_data)
{
	struct k_thread *thread;
	bool ret = false;

	K_SPINLOCK(&sched_spinlock) {
		thread = _priq_wait_best(&wait_q->waitq);

		if (thread != NULL) {
			z_thread_return_value_set_with_data(thread,
							    swap_retval,
							    swap_data);
			unpend_thread_no_timeout(thread);
			(void)z_abort_thread_timeout(thread);
			ready_thread(thread);
			ret = true;
		}
	}

	return ret;
}

int z_sched_wait(struct k_spinlock *lock, k_spinlock_key_t key,
		 _wait_q_t *wait_q, k_timeout_t timeout, void **data)
{
	int ret = z_pend_curr(lock, key, wait_q, timeout);

	if (data != NULL) {
		*data = _current->base.swap_data;
	}
	return ret;
}
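
/* Illustrative sketch, not part of this file: a minimal kernel-internal
 * "signal" object built on z_sched_wait()/z_sched_wake(), in the style of the
 * existing primitives that use these helpers. The struct and function names
 * are assumptions for the example, not an existing API.
 *
 *   struct my_signal {
 *           _wait_q_t waitq;
 *           struct k_spinlock lock;
 *   };
 *
 *   int my_signal_recv(struct my_signal *sig, k_timeout_t timeout, void **msg)
 *   {
 *           k_spinlock_key_t key = k_spin_lock(&sig->lock);
 *
 *           // pends the caller; returns 0 on wake, -EAGAIN on timeout, etc.
 *           return z_sched_wait(&sig->lock, key, &sig->waitq, timeout, msg);
 *   }
 *
 *   bool my_signal_send(struct my_signal *sig, void *msg)
 *   {
 *           // wake the best waiter, giving it retval 0 and the message
 *           return z_sched_wake(&sig->waitq, 0, msg);
 *   }
 */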

int z_sched_waitq_walk(_wait_q_t *wait_q,
		       int (*func)(struct k_thread *, void *), void *data)
{
	struct k_thread *thread;
	int status = 0;

	K_SPINLOCK(&sched_spinlock) {
		_WAIT_Q_FOR_EACH(wait_q, thread) {

			/*
			 * Invoke the callback function on each waiting thread
			 * for as long as there are both waiting threads AND
			 * it returns 0.
			 */
			status = func(thread, data);
			if (status != 0) {
				break;
			}
		}
	}

	return status;
}
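
/* Illustrative sketch, not part of this file: counting the threads pending on
 * a wait queue with z_sched_waitq_walk(). The callback returns 0 so the walk
 * visits every waiter; a non-zero return would stop it early, as implemented
 * above. The wait queue pointer is assumed to come from some kernel object.
 *
 *   static int count_waiter(struct k_thread *thread, void *data)
 *   {
 *           ARG_UNUSED(thread);
 *
 *           (*(int *)data)++;
 *           return 0;
 *   }
 *
 *   int count_waiters(_wait_q_t *wait_q)
 *   {
 *           int n = 0;
 *
 *           (void)z_sched_waitq_walk(wait_q, count_waiter, &n);
 *           return n;
 *   }
 */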