/*
 * Copyright (c) 2018 Intel Corporation
 *
 * SPDX-License-Identifier: Apache-2.0
 */
#include <kernel.h>
#include <ksched.h>
#include <spinlock.h>
#include <sched_priq.h>
#include <wait_q.h>
#include <kswap.h>
#include <kernel_arch_func.h>
#include <syscall_handler.h>
#include <drivers/timer/system_timer.h>
#include <stdbool.h>
#include <kernel_internal.h>
#include <logging/log.h>
LOG_MODULE_DECLARE(os);

/* Maximum time between the time a self-aborting thread flags itself
 * DEAD and the last read or write to its stack memory (i.e. the time
 * of its next swap()). In theory this might be tuned per platform,
 * but in practice this conservative value should be safe.
 */
#define THREAD_ABORT_DELAY_US 500

#if defined(CONFIG_SCHED_DUMB)
#define _priq_run_add z_priq_dumb_add
#define _priq_run_remove z_priq_dumb_remove
# if defined(CONFIG_SCHED_CPU_MASK)
#  define _priq_run_best _priq_dumb_mask_best
# else
#  define _priq_run_best z_priq_dumb_best
# endif
#elif defined(CONFIG_SCHED_SCALABLE)
#define _priq_run_add z_priq_rb_add
#define _priq_run_remove z_priq_rb_remove
#define _priq_run_best z_priq_rb_best
#elif defined(CONFIG_SCHED_MULTIQ)
#define _priq_run_add z_priq_mq_add
#define _priq_run_remove z_priq_mq_remove
#define _priq_run_best z_priq_mq_best
#endif

#if defined(CONFIG_WAITQ_SCALABLE)
#define z_priq_wait_add z_priq_rb_add
#define _priq_wait_remove z_priq_rb_remove
#define _priq_wait_best z_priq_rb_best
#elif defined(CONFIG_WAITQ_DUMB)
#define z_priq_wait_add z_priq_dumb_add
#define _priq_wait_remove z_priq_dumb_remove
#define _priq_wait_best z_priq_dumb_best
#endif
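
/* Illustrative sketch (not part of the original file): which of the
 * priority-queue backends above gets used is a build-time Kconfig
 * choice.  Assuming the standard Zephyr options, an application that
 * wants the balanced red/black-tree queues for both the run queue and
 * the wait queues could put the following in its prj.conf:
 *
 *     CONFIG_SCHED_SCALABLE=y
 *     CONFIG_WAITQ_SCALABLE=y
 *
 * The "dumb" doubly-linked-list backends remain the usual choice for
 * small thread counts, where a linear scan is cheaper than tree
 * maintenance.
 */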

/* the only struct z_kernel instance */
struct z_kernel _kernel;

static struct k_spinlock sched_spinlock;

#define LOCKED(lck) for (k_spinlock_key_t __i = {}, \
                         __key = k_spin_lock(lck); \
                         !__i.key; \
                         k_spin_unlock(lck, __key), __i.key = 1)
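
/* Illustrative sketch of how the LOCKED() macro above is meant to be
 * used (assumed usage, mirroring the call sites later in this file):
 * the one-iteration for loop takes sched_spinlock on entry and
 * releases it in the loop's increment expression after the body runs.
 *
 *     LOCKED(&sched_spinlock) {
 *             update_cache(0);
 *     }
 *
 * Note that returning or break-ing out of the body would skip the
 * k_spin_unlock() in the increment expression, so the call sites
 * below always fall out of the block normally.
 */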

static inline int is_preempt(struct k_thread *thread)
{
#ifdef CONFIG_PREEMPT_ENABLED
        /* explanation in kernel_struct.h */
        return thread->base.preempt <= _PREEMPT_THRESHOLD;
#else
        return 0;
#endif
}

static inline int is_metairq(struct k_thread *thread)
{
#if CONFIG_NUM_METAIRQ_PRIORITIES > 0
        return (thread->base.prio - K_HIGHEST_THREAD_PRIO)
                < CONFIG_NUM_METAIRQ_PRIORITIES;
#else
        return 0;
#endif
}

#if CONFIG_ASSERT
static inline bool is_thread_dummy(struct k_thread *thread)
{
        return (thread->base.thread_state & _THREAD_DUMMY) != 0U;
}
#endif

bool z_is_t1_higher_prio_than_t2(struct k_thread *thread_1,
                                 struct k_thread *thread_2)
{
        if (thread_1->base.prio < thread_2->base.prio) {
                return true;
        }

#ifdef CONFIG_SCHED_DEADLINE
        /* Note that we don't care about wraparound conditions.  The
         * expectation is that the application will have arranged to
         * block the threads, change their priorities or reset their
         * deadlines when the job is complete.  Letting the deadlines
         * go negative is fine and in fact prevents aliasing bugs.
         */
        if (thread_1->base.prio == thread_2->base.prio) {
                int now = (int) k_cycle_get_32();
                int dt1 = thread_1->base.prio_deadline - now;
                int dt2 = thread_2->base.prio_deadline - now;

                return dt1 < dt2;
        }
#endif

        return false;
}
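
/* Worked example for the CONFIG_SCHED_DEADLINE branch above, with
 * illustrative numbers only: if two equal-priority threads carry
 * prio_deadline values of 1300 and 1800 while k_cycle_get_32() reads
 * 1000, the signed deltas are dt1 = 300 and dt2 = 500, so thread_1
 * (the earlier deadline) wins.  Because the comparison is done on
 * deltas from "now" rather than on raw counter values, a deadline
 * that has already passed (negative delta) still sorts ahead of
 * future ones, and the ordering survives 32-bit counter wraparound.
 */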

static ALWAYS_INLINE bool should_preempt(struct k_thread *thread,
                                         int preempt_ok)
{
        /* Preemption is OK if it's being explicitly allowed by
         * software state (e.g. the thread called k_yield())
         */
        if (preempt_ok != 0) {
                return true;
        }

        __ASSERT(_current != NULL, "");

        /* Or if we're pended/suspended/dummy (duh) */
        if (z_is_thread_prevented_from_running(_current)) {
                return true;
        }

        /* Edge case on ARM where a thread can be pended out of an
         * interrupt handler before the "synchronous" swap starts
         * context switching.  Platforms with atomic swap can never
         * hit this.
         */
        if (IS_ENABLED(CONFIG_SWAP_NONATOMIC)
            && z_is_thread_timeout_active(thread)) {
                return true;
        }

        /* Otherwise we have to be running a preemptible thread or
         * switching to a metairq
         */
        if (is_preempt(_current) || is_metairq(thread)) {
                return true;
        }

        /* The idle threads can look "cooperative" if there are no
         * preemptible priorities (this is sort of an API glitch).
         * They must always be preemptible.
         */
        if (!IS_ENABLED(CONFIG_PREEMPT_ENABLED) &&
            z_is_idle_thread_object(_current)) {
                return true;
        }

        return false;
}

#ifdef CONFIG_SCHED_CPU_MASK
static ALWAYS_INLINE struct k_thread *_priq_dumb_mask_best(sys_dlist_t *pq)
{
        /* With masks enabled we need to be prepared to walk the list
         * looking for one we can run
         */
        struct k_thread *thread;

        SYS_DLIST_FOR_EACH_CONTAINER(pq, thread, base.qnode_dlist) {
                if ((thread->base.cpu_mask & BIT(_current_cpu->id)) != 0) {
                        return thread;
                }
        }
        return NULL;
}
#endif
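
/* Illustrative caller-side sketch for the CPU-mask path above
 * (example only; "worker" is a hypothetical thread object): with
 * CONFIG_SCHED_CPU_MASK enabled, an application typically restricts a
 * not-yet-started thread to a single CPU so that the walk above finds
 * it only from that CPU:
 *
 *     k_thread_cpu_mask_clear(&worker);
 *     k_thread_cpu_mask_enable(&worker, 0);
 */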

static ALWAYS_INLINE struct k_thread *next_up(void)
{
        struct k_thread *thread = _priq_run_best(&_kernel.ready_q.runq);

#if (CONFIG_NUM_METAIRQ_PRIORITIES > 0) && (CONFIG_NUM_COOP_PRIORITIES > 0)
        /* MetaIRQs must always attempt to return back to a
         * cooperative thread they preempted and not whatever happens
         * to be highest priority now.  The cooperative thread was
         * promised it wouldn't be preempted (by non-metairq threads)!
         */
        struct k_thread *mirqp = _current_cpu->metairq_preempted;

        if (mirqp != NULL && (thread == NULL || !is_metairq(thread))) {
                if (!z_is_thread_prevented_from_running(mirqp)) {
                        thread = mirqp;
                } else {
                        _current_cpu->metairq_preempted = NULL;
                }
        }
#endif

        /* If the current thread is marked aborting, mark it
         * dead so it will not be scheduled again.
         */
        if (_current->base.thread_state & _THREAD_ABORTING) {
                _current->base.thread_state |= _THREAD_DEAD;
#ifdef CONFIG_SMP
                _current_cpu->swap_ok = true;
#endif
        }

#ifndef CONFIG_SMP
        /* In uniprocessor mode, we can leave the current thread in
         * the queue (actually we have to, otherwise the assembly
         * context switch code for all architectures would be
         * responsible for putting it back in z_swap and ISR return!),
         * which makes this choice simple.
         */
        return thread ? thread : _current_cpu->idle_thread;
#else
        /* Under SMP, the "cache" mechanism for selecting the next
         * thread doesn't work, so we have more work to do to test
         * _current against the best choice from the queue.  Here, the
         * thread selected above represents "the best thread that is
         * not current".
         *
         * Subtle note on "queued": in SMP mode, _current does not
         * live in the queue, so this isn't exactly the same thing as
         * "ready", it means "is _current already added back to the
         * queue such that we don't want to re-add it".
         */
        int queued = z_is_thread_queued(_current);
        int active = !z_is_thread_prevented_from_running(_current);

        if (thread == NULL) {
                thread = _current_cpu->idle_thread;
        }

        if (active) {
                if (!queued &&
                    !z_is_t1_higher_prio_than_t2(thread, _current)) {
                        thread = _current;
                }

                if (!should_preempt(thread, _current_cpu->swap_ok)) {
                        thread = _current;
                }
        }

        /* Put _current back into the queue */
        if (thread != _current && active &&
            !z_is_idle_thread_object(_current) && !queued) {
                _priq_run_add(&_kernel.ready_q.runq, _current);
                z_mark_thread_as_queued(_current);
        }

        /* Take the new _current out of the queue */
        if (z_is_thread_queued(thread)) {
                _priq_run_remove(&_kernel.ready_q.runq, thread);
        }
        z_mark_thread_as_not_queued(thread);

        return thread;
#endif
}

#ifdef CONFIG_TIMESLICING

static int slice_time;
static int slice_max_prio;

#ifdef CONFIG_SWAP_NONATOMIC
/* If z_swap() isn't atomic, then it's possible for a timer interrupt
 * to try to timeslice away _current after it has already pended
 * itself but before the corresponding context switch.  Treat that as
 * a noop condition in z_time_slice().
 */
static struct k_thread *pending_current;
#endif

void z_reset_time_slice(void)
{
        /* Add the elapsed time since the last announced tick to the
         * slice count, as we'll see those "expired" ticks arrive in a
         * FUTURE z_time_slice() call.
         */
        if (slice_time != 0) {
                _current_cpu->slice_ticks = slice_time + z_clock_elapsed();
                z_set_timeout_expiry(slice_time, false);
        }
}

void k_sched_time_slice_set(int32_t slice, int prio)
{
        LOCKED(&sched_spinlock) {
                _current_cpu->slice_ticks = 0;
                slice_time = k_ms_to_ticks_ceil32(slice);
                slice_max_prio = prio;
                z_reset_time_slice();
        }
}

static inline int sliceable(struct k_thread *thread)
{
        return is_preempt(thread)
                && !z_is_prio_higher(thread->base.prio, slice_max_prio)
                && !z_is_idle_thread_object(thread)
                && !z_is_thread_timeout_active(thread);
}

/* Called out of each timer interrupt */
void z_time_slice(int ticks)
{
#ifdef CONFIG_SWAP_NONATOMIC
        if (pending_current == _current) {
                z_reset_time_slice();
                return;
        }
        pending_current = NULL;
#endif

        if (slice_time && sliceable(_current)) {
                if (ticks >= _current_cpu->slice_ticks) {
                        z_move_thread_to_end_of_prio_q(_current);
                        z_reset_time_slice();
                } else {
                        _current_cpu->slice_ticks -= ticks;
                }
        } else {
                _current_cpu->slice_ticks = 0;
        }
}
#endif
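
/* Illustrative usage of the time-slice API above (example values
 * only):
 *
 *     // Give preemptible threads at priority 0 or below (numerically
 *     // >= 0) a 10 ms round-robin slice.
 *     k_sched_time_slice_set(10, 0);
 *
 *     // Passing a slice of 0 disables time slicing again.
 *     k_sched_time_slice_set(0, 0);
 *
 * Cooperative threads are never sliced; see sliceable() above.
 */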

/* Track cooperative threads preempted by metairqs so we can return to
 * them specifically.  Called at the moment a new thread has been
 * selected to run.
 */
static void update_metairq_preempt(struct k_thread *thread)
{
#if (CONFIG_NUM_METAIRQ_PRIORITIES > 0) && (CONFIG_NUM_COOP_PRIORITIES > 0)
        if (is_metairq(thread) && !is_metairq(_current) &&
            !is_preempt(_current)) {
                /* Record new preemption */
                _current_cpu->metairq_preempted = _current;
        } else if (!is_metairq(thread)) {
                /* Returning from existing preemption */
                _current_cpu->metairq_preempted = NULL;
        }
#endif
}

static void update_cache(int preempt_ok)
{
#ifndef CONFIG_SMP
        struct k_thread *thread = next_up();

        if (should_preempt(thread, preempt_ok)) {
#ifdef CONFIG_TIMESLICING
                if (thread != _current) {
                        z_reset_time_slice();
                }
#endif
                update_metairq_preempt(thread);
                _kernel.ready_q.cache = thread;
        } else {
                _kernel.ready_q.cache = _current;
        }

#else
        /* The way this works is that the CPU record keeps its
         * "cooperative swapping is OK" flag until the next reschedule
         * call or context switch.  It doesn't need to be tracked per
         * thread because if the thread gets preempted for whatever
         * reason the scheduler will make the same decision anyway.
         */
        _current_cpu->swap_ok = preempt_ok;
#endif
}

static void ready_thread(struct k_thread *thread)
{
        if (z_is_thread_ready(thread)) {
                sys_trace_thread_ready(thread);
                _priq_run_add(&_kernel.ready_q.runq, thread);
                z_mark_thread_as_queued(thread);
                update_cache(0);
#if defined(CONFIG_SMP) && defined(CONFIG_SCHED_IPI_SUPPORTED)
                arch_sched_ipi();
#endif
        }
}

void z_ready_thread(struct k_thread *thread)
{
        LOCKED(&sched_spinlock) {
                ready_thread(thread);
        }
}

void z_move_thread_to_end_of_prio_q(struct k_thread *thread)
{
        LOCKED(&sched_spinlock) {
                if (z_is_thread_queued(thread)) {
                        _priq_run_remove(&_kernel.ready_q.runq, thread);
                }
                _priq_run_add(&_kernel.ready_q.runq, thread);
                z_mark_thread_as_queued(thread);
                update_cache(thread == _current);
        }
}

void z_sched_start(struct k_thread *thread)
{
        k_spinlock_key_t key = k_spin_lock(&sched_spinlock);

        if (z_has_thread_started(thread)) {
                k_spin_unlock(&sched_spinlock, key);
                return;
        }

        z_mark_thread_as_started(thread);
        ready_thread(thread);
        z_reschedule(&sched_spinlock, key);
}
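
/* Illustrative caller-side sketch for z_sched_start() above: a thread
 * created with a K_FOREVER start delay stays unstarted until
 * k_thread_start() is called on it, which is expected to end up here
 * ("worker", "worker_stack" and "worker_entry" are hypothetical
 * names, example values only):
 *
 *     k_tid_t tid = k_thread_create(&worker, worker_stack,
 *                                   K_THREAD_STACK_SIZEOF(worker_stack),
 *                                   worker_entry, NULL, NULL, NULL,
 *                                   5, 0, K_FOREVER);
 *     ...
 *     k_thread_start(tid);
 */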

void z_impl_k_thread_suspend(struct k_thread *thread)
{
        (void)z_abort_thread_timeout(thread);

        LOCKED(&sched_spinlock) {
                if (z_is_thread_queued(thread)) {
                        _priq_run_remove(&_kernel.ready_q.runq, thread);
                        z_mark_thread_as_not_queued(thread);
                }
                z_mark_thread_as_suspended(thread);
                update_cache(thread == _current);
        }

        if (thread == _current) {
                z_reschedule_unlocked();
        }
}

#ifdef CONFIG_USERSPACE
static inline void z_vrfy_k_thread_suspend(struct k_thread *thread)
{
        Z_OOPS(Z_SYSCALL_OBJ(thread, K_OBJ_THREAD));
        z_impl_k_thread_suspend(thread);
}
#include <syscalls/k_thread_suspend_mrsh.c>
#endif

void z_impl_k_thread_resume(struct k_thread *thread)
{
        k_spinlock_key_t key = k_spin_lock(&sched_spinlock);

        z_mark_thread_as_not_suspended(thread);
        ready_thread(thread);

        z_reschedule(&sched_spinlock, key);
}

#ifdef CONFIG_USERSPACE
static inline void z_vrfy_k_thread_resume(struct k_thread *thread)
{
        Z_OOPS(Z_SYSCALL_OBJ(thread, K_OBJ_THREAD));
        z_impl_k_thread_resume(thread);
}
#include <syscalls/k_thread_resume_mrsh.c>
#endif
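
/* Illustrative application-level usage of the suspend/resume pair
 * implemented above (example code only; "worker" is a hypothetical
 * thread object):
 *
 *     k_thread_suspend(&worker);   // worker is no longer scheduled
 *     ...
 *     k_thread_resume(&worker);    // worker becomes ready again
 *
 * Suspending the calling thread itself takes effect immediately via
 * the z_reschedule_unlocked() path in z_impl_k_thread_suspend().
 */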

static _wait_q_t *pended_on(struct k_thread *thread)
{
        __ASSERT_NO_MSG(thread->base.pended_on);

        return thread->base.pended_on;
}

void z_thread_single_abort(struct k_thread *thread)
{
        if (thread->fn_abort != NULL) {
                thread->fn_abort();
        }

        (void)z_abort_thread_timeout(thread);

        if (IS_ENABLED(CONFIG_SMP)) {
                z_sched_abort(thread);
        }

        LOCKED(&sched_spinlock) {
                struct k_thread *waiter;

                if (z_is_thread_ready(thread)) {
                        if (z_is_thread_queued(thread)) {
                                _priq_run_remove(&_kernel.ready_q.runq,
                                                 thread);
                                z_mark_thread_as_not_queued(thread);
                        }
                        update_cache(thread == _current);
                } else {
                        if (z_is_thread_pending(thread)) {
                                _priq_wait_remove(&pended_on(thread)->waitq,
                                                  thread);
                                z_mark_thread_as_not_pending(thread);
                                thread->base.pended_on = NULL;
                        }
                }

                uint32_t mask = _THREAD_DEAD;

                /* If the abort is happening in interrupt context,
                 * that means that execution will never return to the
                 * thread's stack and that the abort is known to be
                 * complete.  Otherwise the thread still runs a bit
                 * until it can swap, requiring a delay.
                 */
                if (IS_ENABLED(CONFIG_SMP) && arch_is_in_isr()) {
                        mask |= _THREAD_ABORTED_IN_ISR;
                }

                thread->base.thread_state |= mask;

#ifdef CONFIG_USERSPACE
                /* Clear initialized state so that this thread object may be
                 * re-used and triggers errors if API calls are made on it from
                 * user threads
                 *
                 * For a whole host of reasons this is not ideal and should be
                 * iterated on.
                 */
                z_object_uninit(thread->stack_obj);
                z_object_uninit(thread);

                /* Revoke permissions on thread's ID so that it may be
                 * recycled
                 */
                z_thread_perms_all_clear(thread);
#endif

                /* Wake everybody up who was trying to join with this thread.
                 * A reschedule is invoked later by k_thread_abort().
                 */
                while ((waiter = z_waitq_head(&thread->base.join_waiters)) !=
                       NULL) {
                        (void)z_abort_thread_timeout(waiter);
                        _priq_wait_remove(&pended_on(waiter)->waitq, waiter);
                        z_mark_thread_as_not_pending(waiter);
                        waiter->base.pended_on = NULL;
                        arch_thread_return_value_set(waiter, 0);
                        ready_thread(waiter);
                }
        }

        sys_trace_thread_abort(thread);
}
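
/* Note on the join_waiters loop above: threads blocked in
 * k_thread_join() on the aborted thread are pended on
 * thread->base.join_waiters, so waking them with a return value of 0
 * is what lets k_thread_join() return success once the target dies.
 * A minimal caller-side sketch (illustrative only; "worker" is a
 * hypothetical thread object):
 *
 *     k_thread_abort(&worker);
 *     ...
 *     // elsewhere, a monitor thread:
 *     int ret = k_thread_join(&worker, K_FOREVER);   // ret == 0
 */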
2020-01-23 22:04:15 +01:00
|
|
|
static void unready_thread(struct k_thread *thread)
{
	if (z_is_thread_queued(thread)) {
		_priq_run_remove(&_kernel.ready_q.runq, thread);
		z_mark_thread_as_not_queued(thread);
	}
	update_cache(thread == _current);
}

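/* Locking wrapper around unready_thread() for callers that do not
 * already hold sched_spinlock.
 */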
void z_remove_thread_from_ready_q(struct k_thread *thread)
{
	LOCKED(&sched_spinlock) {
		unready_thread(thread);
	}
}

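/* Make a thread non-runnable and pend it on wait_q; a NULL wait_q only
 * marks the thread as pending without queueing it anywhere.
 */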
/* sched_spinlock must be held */
static void add_to_waitq_locked(struct k_thread *thread, _wait_q_t *wait_q)
{
	unready_thread(thread);
	z_mark_thread_as_pending(thread);
	sys_trace_thread_pend(thread);

	if (wait_q != NULL) {
		thread->base.pended_on = wait_q;
		z_priq_wait_add(&wait_q->waitq, thread);
	}
}

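/* Arm the thread's wakeup timeout, unless the wait is K_FOREVER. */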
static void add_thread_timeout(struct k_thread *thread, k_timeout_t timeout)
{
	if (!K_TIMEOUT_EQ(timeout, K_FOREVER)) {
#ifdef CONFIG_LEGACY_TIMEOUT_API
		timeout = _TICK_ALIGN + k_ms_to_ticks_ceil32(timeout);
#endif
		z_add_thread_timeout(thread, timeout);
	}
}

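/* Pend a thread on a wait queue and arm its timeout. The wait queue
 * manipulation is done under sched_spinlock; the timeout is added
 * afterwards.
 */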
static void pend(struct k_thread *thread, _wait_q_t *wait_q,
		 k_timeout_t timeout)
{
	LOCKED(&sched_spinlock) {
		add_to_waitq_locked(thread, wait_q);
	}

	add_thread_timeout(thread, timeout);
}

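/* Pend an arbitrary thread. Only the current thread or a dummy thread
 * is expected here, as the assertion below enforces.
 */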
void z_pend_thread(struct k_thread *thread, _wait_q_t *wait_q,
		   k_timeout_t timeout)
{
	__ASSERT_NO_MSG(thread == _current || is_thread_dummy(thread));
	pend(thread, wait_q, timeout);
}

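/* Return (without unpending) the best waiter on a wait queue, or NULL
 * if the queue is empty. The 'from' argument is unused in this
 * implementation.
 */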
ALWAYS_INLINE struct k_thread *z_find_first_thread_to_unpend(_wait_q_t *wait_q,
							      struct k_thread *from)
{
	ARG_UNUSED(from);

	struct k_thread *ret = NULL;

	LOCKED(&sched_spinlock) {
		ret = _priq_wait_best(&wait_q->waitq);
	}

	return ret;
}

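/* Remove a thread from its wait queue without touching its timeout. */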
ALWAYS_INLINE void z_unpend_thread_no_timeout(struct k_thread *thread)
{
	LOCKED(&sched_spinlock) {
		_priq_wait_remove(&pended_on(thread)->waitq, thread);
		z_mark_thread_as_not_pending(thread);
		thread->base.pended_on = NULL;
	}
}

#ifdef CONFIG_SYS_CLOCK_EXISTS
/* Timeout handler for *_thread_timeout() APIs */
void z_thread_timeout(struct _timeout *timeout)
{
	struct k_thread *thread = CONTAINER_OF(timeout,
					       struct k_thread, base.timeout);

	if (thread->base.pended_on != NULL) {
		z_unpend_thread_no_timeout(thread);
	}
	z_mark_thread_as_started(thread);
	z_mark_thread_as_not_suspended(thread);
	z_ready_thread(thread);
}
#endif

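/* irqlock-based variant of z_pend_curr(): pend _current on wait_q and
 * swap out using an irq_lock() key, with extra bookkeeping for
 * non-atomic swap when timeslicing is enabled.
 */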
int z_pend_curr_irqlock(uint32_t key, _wait_q_t *wait_q, k_timeout_t timeout)
{
	pend(_current, wait_q, timeout);

#if defined(CONFIG_TIMESLICING) && defined(CONFIG_SWAP_NONATOMIC)
	pending_current = _current;

	int ret = z_swap_irqlock(key);

	LOCKED(&sched_spinlock) {
		if (pending_current == _current) {
			pending_current = NULL;
		}
	}
	return ret;
#else
	return z_swap_irqlock(key);
#endif
}

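/* Pend _current on wait_q and context switch away, releasing the
 * caller's spinlock across the swap; returns whatever z_swap() reports
 * once the thread runs again.
 *
 * Illustrative caller pattern for a blocking "take"-style API (a
 * sketch only, using hypothetical helpers, not an excerpt from any
 * particular kernel object):
 *
 *	k_spinlock_key_t key = k_spin_lock(&obj_lock);
 *
 *	if (resource_available(obj)) {
 *		consume(obj);
 *		k_spin_unlock(&obj_lock, key);
 *		return 0;
 *	}
 *	return z_pend_curr(&obj_lock, key, &obj->wait_q, timeout);
 */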
int z_pend_curr(struct k_spinlock *lock, k_spinlock_key_t key,
		_wait_q_t *wait_q, k_timeout_t timeout)
{
#if defined(CONFIG_TIMESLICING) && defined(CONFIG_SWAP_NONATOMIC)
	pending_current = _current;
#endif
	pend(_current, wait_q, timeout);
	return z_swap(lock, key);
}

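/* Remove the best waiter from wait_q and cancel its timeout; returns
 * NULL if nothing is pended there. Waking the returned thread (setting
 * its return value, readying it) is typically left to the caller.
 */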
struct k_thread *z_unpend_first_thread(_wait_q_t *wait_q)
{
	struct k_thread *thread = z_unpend1_no_timeout(wait_q);

	if (thread != NULL) {
		(void)z_abort_thread_timeout(thread);
	}

	return thread;
}

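/* Unpend a specific thread and cancel its timeout. The thread is not
 * made ready here; that is up to the caller.
 */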
void z_unpend_thread(struct k_thread *thread)
{
	z_unpend_thread_no_timeout(thread);
	(void)z_abort_thread_timeout(thread);
}

/* Priority set utility that does no rescheduling, it just changes the
 * run queue state, returning true if a reschedule is needed later.
 */
bool z_set_prio(struct k_thread *thread, int prio)
{
	bool need_sched = false;

	LOCKED(&sched_spinlock) {
		need_sched = z_is_thread_ready(thread);

		if (need_sched) {
			/* Don't requeue on SMP if it's the running thread */
			if (!IS_ENABLED(CONFIG_SMP) || z_is_thread_queued(thread)) {
				_priq_run_remove(&_kernel.ready_q.runq, thread);
				thread->base.prio = prio;
				_priq_run_add(&_kernel.ready_q.runq, thread);
			} else {
				thread->base.prio = prio;
			}
			update_cache(1);
		} else {
			thread->base.prio = prio;
		}
	}

	sys_trace_thread_priority_set(thread);

	return need_sched;
}

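/* Reschedule-aware priority change.  This is kernel-internal; application
 * code would normally go through the public API, e.g. (illustrative
 * sketch only, my_thread is a placeholder):
 *
 *     k_thread_priority_set(&my_thread, K_PRIO_PREEMPT(2));
 *
 * which is expected to funnel into this function (assumption based on
 * the k_/z_ naming convention; the syscall wrapper is not shown here).
 */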
void z_thread_priority_set(struct k_thread *thread, int prio)
{
	bool need_sched = z_set_prio(thread, prio);

#if defined(CONFIG_SMP) && defined(CONFIG_SCHED_IPI_SUPPORTED)
	arch_sched_ipi();
#endif

	if (need_sched && _current->base.sched_locked == 0) {
		z_reschedule_unlocked();
	}
}

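/* Reschedule gate: switching is only legal when the saved IRQ key shows
 * interrupts were unlocked at the outermost level and we are not running
 * in interrupt context.  On SMP the per-CPU swap_ok flag is also cleared
 * here.
 */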
static inline int resched(uint32_t key)
{
#ifdef CONFIG_SMP
	_current_cpu->swap_ok = 0;
#endif

	return arch_irq_unlocked(key) && !arch_is_in_isr();
}

/*
 * Check if the next ready thread is the same as the current thread
 * and save the trip if true.
 */
static inline bool need_swap(void)
{
	/* the SMP case will be handled in C based z_swap() */
#ifdef CONFIG_SMP
	return true;
#else
	struct k_thread *new_thread;

	/* Check if the next ready thread is the same as the current thread */
	new_thread = z_get_next_ready_thread();
	return new_thread != _current;
#endif
}

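/* Spinlock-based reschedule point: either swap to another thread or
 * simply drop the caller's lock when no switch is needed or allowed.
 */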
void z_reschedule(struct k_spinlock *lock, k_spinlock_key_t key)
{
	if (resched(key.key) && need_swap()) {
		z_swap(lock, key);
	} else {
		k_spin_unlock(lock, key);
	}
}

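/* Same reschedule point for irq_lock()-based callers, keyed on the saved
 * interrupt state instead of a spinlock.
 */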
void z_reschedule_irqlock(uint32_t key)
{
	if (resched(key)) {
		z_swap_irqlock(key);
	} else {
		irq_unlock(key);
	}
}

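/* Prevent the current thread from being preempted by other threads
 * (interrupts remain enabled); calls nest and are undone by
 * k_sched_unlock().  Typical usage sketch:
 *
 *     k_sched_lock();
 *     ... section that must not be preempted ...
 *     k_sched_unlock();
 */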
void k_sched_lock(void)
{
	LOCKED(&sched_spinlock) {
		z_sched_lock();
	}
}

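/* Release one nesting level of the scheduler lock taken by
 * k_sched_lock(), then give the scheduler a chance to run a
 * higher-priority thread that became ready in the meantime.
 */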
void k_sched_unlock(void)
{
#ifdef CONFIG_PREEMPT_ENABLED
	LOCKED(&sched_spinlock) {
		__ASSERT(_current->base.sched_locked != 0, "");
		__ASSERT(!arch_is_in_isr(), "");

		++_current->base.sched_locked;
		update_cache(0);
	}

	LOG_DBG("scheduler unlocked (%p:%d)",
		_current, _current->base.sched_locked);

	z_reschedule_unlocked();
#endif
}

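/* On SMP the run queue must be sampled under the scheduler spinlock;
 * the uniprocessor variant is presumably a lock-free inline defined
 * elsewhere (assumption -- it is not visible in this file).
 */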
#ifdef CONFIG_SMP
struct k_thread *z_get_next_ready_thread(void)
{
	struct k_thread *ret = NULL;

	LOCKED(&sched_spinlock) {
		ret = next_up();
	}

	return ret;
}
#endif

/* Just a wrapper around _current = xxx with tracing */
static inline void set_current(struct k_thread *new_thread)
{
	_current_cpu->current = new_thread;
}

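/* Core of the context-switch path for CONFIG_USE_SWITCH architectures:
 * stash the outgoing thread's switch handle, pick the next thread (under
 * the scheduler lock on SMP) and hand its switch handle back to the
 * arch layer.
 */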
#ifdef CONFIG_USE_SWITCH
void *z_get_next_switch_handle(void *interrupted)
{
	_current->switch_handle = interrupted;

	z_check_stack_sentinel();

#ifdef CONFIG_SMP
	LOCKED(&sched_spinlock) {
		struct k_thread *thread = next_up();

		if (_current != thread) {
			update_metairq_preempt(thread);

#ifdef CONFIG_TIMESLICING
			z_reset_time_slice();
#endif
			_current_cpu->swap_ok = 0;
			set_current(thread);
#ifdef CONFIG_SPIN_VALIDATE
			/* Changed _current!  Update the spinlock
			 * bookkeeping so the validation doesn't get
			 * confused when the "wrong" thread tries to
			 * release the lock.
			 */
			z_spin_lock_set_owner(&sched_spinlock);
#endif
		}
	}
#else
	set_current(z_get_next_ready_thread());
#endif

	wait_for_switch(_current);
	return _current->switch_handle;
}
#endif

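/* "Dumb" priority queue backend (CONFIG_SCHED_DUMB): a single dlist kept
 * sorted by priority, so insertion is O(n) in the number of ready
 * threads while removal and "best" lookup are O(1).
 */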
ALWAYS_INLINE void z_priq_dumb_add(sys_dlist_t *pq, struct k_thread *thread)
{
	struct k_thread *t;

	__ASSERT_NO_MSG(!z_is_idle_thread_object(thread));

	SYS_DLIST_FOR_EACH_CONTAINER(pq, t, base.qnode_dlist) {
		if (z_is_t1_higher_prio_than_t2(thread, t)) {
			sys_dlist_insert(&t->base.qnode_dlist,
					 &thread->base.qnode_dlist);
			return;
		}
	}

	sys_dlist_append(pq, &thread->base.qnode_dlist);
}

void z_priq_dumb_remove(sys_dlist_t *pq, struct k_thread *thread)
{
#if defined(CONFIG_SWAP_NONATOMIC) && defined(CONFIG_SCHED_DUMB)
	if (pq == &_kernel.ready_q.runq && thread == _current &&
	    z_is_thread_prevented_from_running(thread)) {
		return;
	}
#endif

	__ASSERT_NO_MSG(!z_is_idle_thread_object(thread));

	sys_dlist_remove(&thread->base.qnode_dlist);
}

struct k_thread *z_priq_dumb_best(sys_dlist_t *pq)
{
	struct k_thread *thread = NULL;
	sys_dnode_t *n = sys_dlist_peek_head(pq);

	if (n != NULL) {
		thread = CONTAINER_OF(n, struct k_thread, base.qnode_dlist);
	}
	return thread;
}

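/* Scalable (CONFIG_SCHED_SCALABLE) backend: threads live in a red/black
 * tree ordered by priority, with a monotonically increasing order_key
 * acting as a FIFO tie-breaker among threads of equal priority.
 */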
bool z_priq_rb_lessthan(struct rbnode *a, struct rbnode *b)
{
	struct k_thread *thread_a, *thread_b;

	thread_a = CONTAINER_OF(a, struct k_thread, base.qnode_rb);
	thread_b = CONTAINER_OF(b, struct k_thread, base.qnode_rb);

	if (z_is_t1_higher_prio_than_t2(thread_a, thread_b)) {
		return true;
	} else if (z_is_t1_higher_prio_than_t2(thread_b, thread_a)) {
		return false;
	} else {
		return thread_a->base.order_key < thread_b->base.order_key;
	}
}

void z_priq_rb_add(struct _priq_rb *pq, struct k_thread *thread)
{
	struct k_thread *t;

	__ASSERT_NO_MSG(!z_is_idle_thread_object(thread));

	thread->base.order_key = pq->next_order_key++;

	/* Renumber at wraparound. This is tiny code, and in practice
	 * will almost never be hit on real systems. BUT on very
	 * long-running systems where a priq never completely empties
	 * AND that contains very large numbers of threads, it can be
	 * a latency glitch to loop over all the threads like this.
	 */
	if (!pq->next_order_key) {
		RB_FOR_EACH_CONTAINER(&pq->tree, t, base.qnode_rb) {
			t->base.order_key = pq->next_order_key++;
		}
	}

	rb_insert(&pq->tree, &thread->base.qnode_rb);
}

void z_priq_rb_remove(struct _priq_rb *pq, struct k_thread *thread)
{
#if defined(CONFIG_SWAP_NONATOMIC) && defined(CONFIG_SCHED_SCALABLE)
	if (pq == &_kernel.ready_q.runq && thread == _current &&
	    z_is_thread_prevented_from_running(thread)) {
		return;
	}
#endif

	__ASSERT_NO_MSG(!z_is_idle_thread_object(thread));

	rb_remove(&pq->tree, &thread->base.qnode_rb);

	if (!pq->tree.root) {
		pq->next_order_key = 0;
	}
}

struct k_thread *z_priq_rb_best(struct _priq_rb *pq)
{
	struct k_thread *thread = NULL;
	struct rbnode *n = rb_get_min(&pq->tree);

	if (n != NULL) {
		thread = CONTAINER_OF(n, struct k_thread, base.qnode_rb);
	}
	return thread;
}
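
/*
 * Reader's note on the CONFIG_SCHED_SCALABLE backend (a hedged reading of
 * the code above, not authoritative documentation): the ready queue is a
 * red/black tree ordered by z_priq_rb_lessthan(), so z_priq_rb_best() is
 * simply "take the leftmost node", an O(log n) operation.  The
 * next_order_key counter that z_priq_rb_remove() clears when the tree
 * empties appears to act as a FIFO tie-breaker between threads of equal
 * priority; resetting it keeps the key from growing without bound.
 */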

#ifdef CONFIG_SCHED_MULTIQ
# if (K_LOWEST_THREAD_PRIO - K_HIGHEST_THREAD_PRIO) > 31
# error Too many priorities for multiqueue scheduler (max 32)
# endif
#endif

ALWAYS_INLINE void z_priq_mq_add(struct _priq_mq *pq, struct k_thread *thread)
{
	int priority_bit = thread->base.prio - K_HIGHEST_THREAD_PRIO;

	sys_dlist_append(&pq->queues[priority_bit], &thread->base.qnode_dlist);
	pq->bitmask |= BIT(priority_bit);
}

ALWAYS_INLINE void z_priq_mq_remove(struct _priq_mq *pq, struct k_thread *thread)
{
#if defined(CONFIG_SWAP_NONATOMIC) && defined(CONFIG_SCHED_MULTIQ)
	if (pq == &_kernel.ready_q.runq && thread == _current &&
	    z_is_thread_prevented_from_running(thread)) {
		return;
	}
#endif

	int priority_bit = thread->base.prio - K_HIGHEST_THREAD_PRIO;

	sys_dlist_remove(&thread->base.qnode_dlist);
	if (sys_dlist_is_empty(&pq->queues[priority_bit])) {
		pq->bitmask &= ~BIT(priority_bit);
	}
}

struct k_thread *z_priq_mq_best(struct _priq_mq *pq)
{
	if (!pq->bitmask) {
		return NULL;
	}

	struct k_thread *thread = NULL;
	sys_dlist_t *l = &pq->queues[__builtin_ctz(pq->bitmask)];
	sys_dnode_t *n = sys_dlist_peek_head(l);

	if (n != NULL) {
		thread = CONTAINER_OF(n, struct k_thread, base.qnode_dlist);
	}
	return thread;
}
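
/*
 * Worked example for the CONFIG_SCHED_MULTIQ backend (illustrative only):
 * a priority maps to a queue/bit index via prio - K_HIGHEST_THREAD_PRIO,
 * so the numerically lowest (most urgent) priority lands at the lowest
 * index.  If bitmask == 0x28 (bits 3 and 5 set, i.e. queues 3 and 5 are
 * non-empty), __builtin_ctz(0x28) == 3, and z_priq_mq_best() peeks the
 * head of queues[3], the most urgent non-empty level.  Add, remove and
 * best are all O(1); the 32-bit bitmask is why the #error above caps this
 * backend at 32 priority levels.
 */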

int z_unpend_all(_wait_q_t *wait_q)
{
	int need_sched = 0;
	struct k_thread *thread;

	while ((thread = z_waitq_head(wait_q)) != NULL) {
		z_unpend_thread(thread);
		z_ready_thread(thread);
		need_sched = 1;
	}

	return need_sched;
}
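
/*
 * Usage sketch (not from this file): a caller flushing a wait queue, e.g.
 * an object reset path, would typically hold the object's lock, wake every
 * waiter, and reschedule once only if somebody actually became ready:
 *
 *	if (z_unpend_all(&obj->wait_q) != 0) {
 *		z_reschedule(&obj->lock, key);
 *	}
 *
 * where 'obj' and 'key' are hypothetical caller-side names used purely for
 * illustration.
 */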

void z_sched_init(void)
{
#ifdef CONFIG_SCHED_DUMB
	sys_dlist_init(&_kernel.ready_q.runq);
#endif

#ifdef CONFIG_SCHED_SCALABLE
	_kernel.ready_q.runq = (struct _priq_rb) {
		.tree = {
			.lessthan_fn = z_priq_rb_lessthan,
		}
	};
#endif

#ifdef CONFIG_SCHED_MULTIQ
	for (int i = 0; i < ARRAY_SIZE(_kernel.ready_q.runq.queues); i++) {
		sys_dlist_init(&_kernel.ready_q.runq.queues[i]);
	}
#endif

#ifdef CONFIG_TIMESLICING
	k_sched_time_slice_set(CONFIG_TIMESLICE_SIZE,
			       CONFIG_TIMESLICE_PRIORITY);
#endif
}

int z_impl_k_thread_priority_get(k_tid_t thread)
{
	return thread->base.prio;
}

#ifdef CONFIG_USERSPACE
static inline int z_vrfy_k_thread_priority_get(k_tid_t thread)
{
	Z_OOPS(Z_SYSCALL_OBJ(thread, K_OBJ_THREAD));
	return z_impl_k_thread_priority_get(thread);
}
#include <syscalls/k_thread_priority_get_mrsh.c>
#endif
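
/*
 * Illustrative application-side call (a sketch, not part of this file):
 * user code reaches the handler above through the public
 * k_thread_priority_get() wrapper, which runs z_impl_k_thread_priority_get()
 * directly in kernel mode and goes through the generated z_mrsh_/z_vrfy_
 * path from user mode:
 *
 *	int prio = k_thread_priority_get(k_current_get());
 */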

void z_impl_k_thread_priority_set(k_tid_t tid, int prio)
{
	/*
	 * Use NULL, since we cannot know what the entry point is (we do not
	 * keep track of it) and idle cannot change its priority.
	 */
	Z_ASSERT_VALID_PRIO(prio, NULL);
	__ASSERT(!arch_is_in_isr(), "");

	struct k_thread *thread = (struct k_thread *)tid;

	z_thread_priority_set(thread, prio);
}

#ifdef CONFIG_USERSPACE
static inline void z_vrfy_k_thread_priority_set(k_tid_t thread, int prio)
{
	Z_OOPS(Z_SYSCALL_OBJ(thread, K_OBJ_THREAD));
	Z_OOPS(Z_SYSCALL_VERIFY_MSG(_is_valid_prio(prio, NULL),
				    "invalid thread priority %d", prio));
	Z_OOPS(Z_SYSCALL_VERIFY_MSG((int8_t)prio >= thread->base.prio,
				    "thread priority may only be downgraded (%d < %d)",
				    prio, thread->base.prio));

	z_impl_k_thread_priority_set(thread, prio);
}
#include <syscalls/k_thread_priority_set_mrsh.c>
#endif
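
/*
 * Illustrative use of the pair above (a sketch, not part of this file).
 * z_vrfy_k_thread_priority_set() only lets a user thread make a priority
 * numerically larger (i.e. less urgent), so from user mode the following
 * self-demotion is accepted while a self-promotion would Z_OOPS:
 *
 *	int cur = k_thread_priority_get(k_current_get());
 *
 *	k_thread_priority_set(k_current_get(), cur + 1);
 */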

#ifdef CONFIG_SCHED_DEADLINE
void z_impl_k_thread_deadline_set(k_tid_t tid, int deadline)
{
	struct k_thread *thread = tid;

	LOCKED(&sched_spinlock) {
		thread->base.prio_deadline = k_cycle_get_32() + deadline;
		if (z_is_thread_queued(thread)) {
			_priq_run_remove(&_kernel.ready_q.runq, thread);
			_priq_run_add(&_kernel.ready_q.runq, thread);
		}
	}
}

#ifdef CONFIG_USERSPACE
static inline void z_vrfy_k_thread_deadline_set(k_tid_t tid, int deadline)
{
	struct k_thread *thread = tid;

	Z_OOPS(Z_SYSCALL_OBJ(thread, K_OBJ_THREAD));
	Z_OOPS(Z_SYSCALL_VERIFY_MSG(deadline > 0,
				    "invalid thread deadline %d",
				    (int)deadline));

	z_impl_k_thread_deadline_set((k_tid_t)thread, deadline);
}
#include <syscalls/k_thread_deadline_set_mrsh.c>
#endif
#endif
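
/*
 * Illustrative CONFIG_SCHED_DEADLINE use (a sketch, not part of this
 * file): the deadline argument is a delta in hardware cycles measured from
 * "now" (k_cycle_get_32()), and the remove/add pair above is what makes a
 * new deadline take effect for a thread that is already queued:
 *
 *	k_thread_deadline_set(k_current_get(), 32768);  // cycle count chosen arbitrarily
 */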

void z_impl_k_yield(void)
{
	__ASSERT(!arch_is_in_isr(), "");

	if (!z_is_idle_thread_object(_current)) {
		LOCKED(&sched_spinlock) {
			if (!IS_ENABLED(CONFIG_SMP) ||
			    z_is_thread_queued(_current)) {
				_priq_run_remove(&_kernel.ready_q.runq,
						 _current);
			}
			_priq_run_add(&_kernel.ready_q.runq, _current);
			z_mark_thread_as_queued(_current);
			update_cache(1);
		}
	}
	z_swap_unlocked();
}

#ifdef CONFIG_USERSPACE
static inline void z_vrfy_k_yield(void)
{
	z_impl_k_yield();
}
#include <syscalls/k_yield_mrsh.c>
#endif
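
/*
 * Illustrative use of k_yield() (a sketch, not part of this file): because
 * z_impl_k_yield() removes the current thread and appends it back to its
 * priority level before swapping, yielding gives equal-priority peers a
 * turn even when everyone is cooperative:
 *
 *	while (more_work()) {      // more_work() is a hypothetical helper
 *		do_one_chunk();    // so is do_one_chunk()
 *		k_yield();         // let peers at the same priority run
 *	}
 */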
|
|
|
|
|
2020-05-27 18:26:57 +02:00
|
|
|
static int32_t z_tick_sleep(int32_t ticks)
|
unified: initial unified kernel implementation
Summary of what this includes:
initialization:
Copy from nano_init.c, with the following changes:
- the main thread is the continuation of the init thread, but an idle
thread is created as well
- _main() initializes threads in groups and starts the EXE group
- the ready queues are initialized
- the main thread is marked as non-essential once the system init is
done
- a weak main() symbol is provided if the application does not provide a
main() function
scheduler:
Not an exhaustive list, but basically provide primitives for:
- adding/removing a thread to/from a wait queue
- adding/removing a thread to/from the ready queue
- marking thread as ready
- locking/unlocking the scheduler
- instead of locking interrupts
- getting/setting thread priority
- checking what state (coop/preempt) a thread is currenlty running in
- rescheduling threads
- finding what thread is the next to run
- yielding/sleeping/aborting sleep
- finding the current thread
threads:
- Add operationns on threads, such as creating and starting them.
standardized handling of kernel object return codes:
- Kernel objects now cause _Swap() to return the following values:
0 => operation successful
-EAGAIN => operation timed out
-Exxxxx => operation failed for another reason
- The thread's swap_data field can be used to return any additional
information required to complete the operation, such as the actual
result of a successful operation.
timeouts:
- same as nano timeouts, renamed to simply 'timeouts'
- the kernel is still tick-based, but objects take timeout values in
ms for forward compatibility with a tickless kernel.
semaphores:
- Port of the nanokernel semaphores, which have the same basic behaviour
as the microkernel ones. Semaphore groups are not yet implemented.
- These semaphores are enhanced in that they accept an initial count and a
count limit. This allows configuring them as binary semaphores, and also
provisioning them without having to "give" the semaphore multiple times
before using them.
mutexes:
- Straight port of the microkernel mutexes. An init function is added to
allow defining them at runtime.
pipes:
- straight port
timers:
- amalgamation of nano and micro timers, with all functionalities
intact.
events:
- re-implementation, using semaphores and workqueues.
mailboxes:
- straight port
message queues:
- straight port of microkernel FIFOs
memory maps:
- straight port
workqueues:
- Basically, have all APIs follow the k_ naming rule, and use the _timeout
subsystem from the unified kernel directory, and not the _nano_timeout
one.
stacks:
- Port of the nanokernel stacks. They can now have multiple threads
pending on them and threads can wait with a timeout.
LIFOs:
- Straight port of the nanokernel LIFOs.
FIFOs:
- Straight port of the nanokernel FIFOs.
Work by: Dmitriy Korovkin <dmitriy.korovkin@windriver.com>
Peter Mitsis <peter.mitsis@windriver.com>
Allan Stephens <allan.stephens@windriver.com>
Benjamin Walsh <benjamin.walsh@windriver.com>
Change-Id: Id3cadb3694484ab2ca467889cfb029be3cd3a7d6
Signed-off-by: Benjamin Walsh <benjamin.walsh@windriver.com>
2016-09-03 00:55:39 +02:00
|
|
|
{
|
2016-12-14 21:24:12 +01:00
|
|
|
#ifdef CONFIG_MULTITHREADING
|
2020-05-27 18:26:57 +02:00
|
|
|
uint32_t expected_wakeup_time;
|
2016-12-02 15:31:08 +01:00
|
|
|
|
2019-11-07 21:43:29 +01:00
|
|
|
__ASSERT(!arch_is_in_isr(), "");
|
unified: initial unified kernel implementation
Summary of what this includes:
initialization:
Copy from nano_init.c, with the following changes:
- the main thread is the continuation of the init thread, but an idle
thread is created as well
- _main() initializes threads in groups and starts the EXE group
- the ready queues are initialized
- the main thread is marked as non-essential once the system init is
done
- a weak main() symbol is provided if the application does not provide a
main() function
scheduler:
Not an exhaustive list, but basically provide primitives for:
- adding/removing a thread to/from a wait queue
- adding/removing a thread to/from the ready queue
- marking thread as ready
- locking/unlocking the scheduler
- instead of locking interrupts
- getting/setting thread priority
- checking what state (coop/preempt) a thread is currenlty running in
- rescheduling threads
- finding what thread is the next to run
- yielding/sleeping/aborting sleep
- finding the current thread
threads:
- Add operationns on threads, such as creating and starting them.
standardized handling of kernel object return codes:
- Kernel objects now cause _Swap() to return the following values:
0 => operation successful
-EAGAIN => operation timed out
-Exxxxx => operation failed for another reason
- The thread's swap_data field can be used to return any additional
information required to complete the operation, such as the actual
result of a successful operation.
timeouts:
- same as nano timeouts, renamed to simply 'timeouts'
- the kernel is still tick-based, but objects take timeout values in
ms for forward compatibility with a tickless kernel.
semaphores:
- Port of the nanokernel semaphores, which have the same basic behaviour
as the microkernel ones. Semaphore groups are not yet implemented.
- These semaphores are enhanced in that they accept an initial count and a
count limit. This allows configuring them as binary semaphores, and also
provisioning them without having to "give" the semaphore multiple times
before using them.
mutexes:
- Straight port of the microkernel mutexes. An init function is added to
allow defining them at runtime.
pipes:
- straight port
timers:
- amalgamation of nano and micro timers, with all functionalities
intact.
events:
- re-implementation, using semaphores and workqueues.
mailboxes:
- straight port
message queues:
- straight port of microkernel FIFOs
memory maps:
- straight port
workqueues:
- Basically, have all APIs follow the k_ naming rule, and use the _timeout
subsystem from the unified kernel directory, and not the _nano_timeout
one.
stacks:
- Port of the nanokernel stacks. They can now have multiple threads
pending on them and threads can wait with a timeout.
LIFOs:
- Straight port of the nanokernel LIFOs.
FIFOs:
- Straight port of the nanokernel FIFOs.
Work by: Dmitriy Korovkin <dmitriy.korovkin@windriver.com>
Peter Mitsis <peter.mitsis@windriver.com>
Allan Stephens <allan.stephens@windriver.com>
Benjamin Walsh <benjamin.walsh@windriver.com>
Change-Id: Id3cadb3694484ab2ca467889cfb029be3cd3a7d6
Signed-off-by: Benjamin Walsh <benjamin.walsh@windriver.com>
2016-09-03 00:55:39 +02:00
|
|
|
|
2019-12-02 16:24:08 +01:00
|
|
|
LOG_DBG("thread %p for %d ticks", _current, ticks);
|
unified: initial unified kernel implementation
Summary of what this includes:
initialization:
Copy from nano_init.c, with the following changes:
- the main thread is the continuation of the init thread, but an idle
thread is created as well
- _main() initializes threads in groups and starts the EXE group
- the ready queues are initialized
- the main thread is marked as non-essential once the system init is
done
- a weak main() symbol is provided if the application does not provide a
main() function
scheduler:
Not an exhaustive list, but basically provide primitives for:
- adding/removing a thread to/from a wait queue
- adding/removing a thread to/from the ready queue
- marking thread as ready
- locking/unlocking the scheduler
- instead of locking interrupts
- getting/setting thread priority
- checking what state (coop/preempt) a thread is currenlty running in
- rescheduling threads
- finding what thread is the next to run
- yielding/sleeping/aborting sleep
- finding the current thread
threads:
- Add operationns on threads, such as creating and starting them.
standardized handling of kernel object return codes:
- Kernel objects now cause _Swap() to return the following values:
0 => operation successful
-EAGAIN => operation timed out
-Exxxxx => operation failed for another reason
- The thread's swap_data field can be used to return any additional
information required to complete the operation, such as the actual
result of a successful operation.
timeouts:
- same as nano timeouts, renamed to simply 'timeouts'
- the kernel is still tick-based, but objects take timeout values in
ms for forward compatibility with a tickless kernel.
semaphores:
- Port of the nanokernel semaphores, which have the same basic behaviour
as the microkernel ones. Semaphore groups are not yet implemented.
- These semaphores are enhanced in that they accept an initial count and a
count limit. This allows configuring them as binary semaphores, and also
provisioning them without having to "give" the semaphore multiple times
before using them.
mutexes:
- Straight port of the microkernel mutexes. An init function is added to
allow defining them at runtime.
pipes:
- straight port
timers:
- amalgamation of nano and micro timers, with all functionalities
intact.
events:
- re-implementation, using semaphores and workqueues.
mailboxes:
- straight port
message queues:
- straight port of microkernel FIFOs
memory maps:
- straight port
workqueues:
- Basically, have all APIs follow the k_ naming rule, and use the _timeout
subsystem from the unified kernel directory, and not the _nano_timeout
one.
stacks:
- Port of the nanokernel stacks. They can now have multiple threads
pending on them and threads can wait with a timeout.
LIFOs:
- Straight port of the nanokernel LIFOs.
FIFOs:
- Straight port of the nanokernel FIFOs.
Work by: Dmitriy Korovkin <dmitriy.korovkin@windriver.com>
Peter Mitsis <peter.mitsis@windriver.com>
Allan Stephens <allan.stephens@windriver.com>
Benjamin Walsh <benjamin.walsh@windriver.com>
Change-Id: Id3cadb3694484ab2ca467889cfb029be3cd3a7d6
Signed-off-by: Benjamin Walsh <benjamin.walsh@windriver.com>
2016-09-03 00:55:39 +02:00
|
|
|
|
2016-12-10 01:57:17 +01:00
|
|
|
/* wait of 0 ms is treated as a 'yield' */
|
2019-05-08 22:22:46 +02:00
|
|
|
if (ticks == 0) {
|
unified: initial unified kernel implementation
Summary of what this includes:
initialization:
Copy from nano_init.c, with the following changes:
- the main thread is the continuation of the init thread, but an idle
thread is created as well
- _main() initializes threads in groups and starts the EXE group
- the ready queues are initialized
- the main thread is marked as non-essential once the system init is
done
- a weak main() symbol is provided if the application does not provide a
main() function
scheduler:
Not an exhaustive list, but basically provide primitives for:
- adding/removing a thread to/from a wait queue
- adding/removing a thread to/from the ready queue
- marking thread as ready
- locking/unlocking the scheduler
- instead of locking interrupts
- getting/setting thread priority
- checking what state (coop/preempt) a thread is currenlty running in
- rescheduling threads
- finding what thread is the next to run
- yielding/sleeping/aborting sleep
- finding the current thread
threads:
- Add operationns on threads, such as creating and starting them.
standardized handling of kernel object return codes:
- Kernel objects now cause _Swap() to return the following values:
0 => operation successful
-EAGAIN => operation timed out
-Exxxxx => operation failed for another reason
- The thread's swap_data field can be used to return any additional
information required to complete the operation, such as the actual
result of a successful operation.
timeouts:
- same as nano timeouts, renamed to simply 'timeouts'
- the kernel is still tick-based, but objects take timeout values in
ms for forward compatibility with a tickless kernel.
semaphores:
- Port of the nanokernel semaphores, which have the same basic behaviour
as the microkernel ones. Semaphore groups are not yet implemented.
- These semaphores are enhanced in that they accept an initial count and a
count limit. This allows configuring them as binary semaphores, and also
provisioning them without having to "give" the semaphore multiple times
before using them.
mutexes:
- Straight port of the microkernel mutexes. An init function is added to
allow defining them at runtime.
pipes:
- straight port
timers:
- amalgamation of nano and micro timers, with all functionalities
intact.
events:
- re-implementation, using semaphores and workqueues.
mailboxes:
- straight port
message queues:
- straight port of microkernel FIFOs
memory maps:
- straight port
workqueues:
- Basically, have all APIs follow the k_ naming rule, and use the _timeout
subsystem from the unified kernel directory, and not the _nano_timeout
one.
stacks:
- Port of the nanokernel stacks. They can now have multiple threads
pending on them and threads can wait with a timeout.
LIFOs:
- Straight port of the nanokernel LIFOs.
FIFOs:
- Straight port of the nanokernel FIFOs.
Work by: Dmitriy Korovkin <dmitriy.korovkin@windriver.com>
Peter Mitsis <peter.mitsis@windriver.com>
Allan Stephens <allan.stephens@windriver.com>
Benjamin Walsh <benjamin.walsh@windriver.com>
Change-Id: Id3cadb3694484ab2ca467889cfb029be3cd3a7d6
Signed-off-by: Benjamin Walsh <benjamin.walsh@windriver.com>
2016-09-03 00:55:39 +02:00
|
|
|
k_yield();
|
2018-10-25 17:45:08 +02:00
|
|
|
return 0;
|
unified: initial unified kernel implementation
Summary of what this includes:
initialization:
Copy from nano_init.c, with the following changes:
- the main thread is the continuation of the init thread, but an idle
thread is created as well
- _main() initializes threads in groups and starts the EXE group
- the ready queues are initialized
- the main thread is marked as non-essential once the system init is
done
- a weak main() symbol is provided if the application does not provide a
main() function
scheduler:
Not an exhaustive list, but basically provide primitives for:
- adding/removing a thread to/from a wait queue
- adding/removing a thread to/from the ready queue
- marking thread as ready
- locking/unlocking the scheduler
- instead of locking interrupts
- getting/setting thread priority
- checking what state (coop/preempt) a thread is currenlty running in
- rescheduling threads
- finding what thread is the next to run
- yielding/sleeping/aborting sleep
- finding the current thread
threads:
- Add operationns on threads, such as creating and starting them.
standardized handling of kernel object return codes:
- Kernel objects now cause _Swap() to return the following values:
0 => operation successful
-EAGAIN => operation timed out
-Exxxxx => operation failed for another reason
- The thread's swap_data field can be used to return any additional
information required to complete the operation, such as the actual
result of a successful operation.
timeouts:
- same as nano timeouts, renamed to simply 'timeouts'
- the kernel is still tick-based, but objects take timeout values in
ms for forward compatibility with a tickless kernel.
semaphores:
- Port of the nanokernel semaphores, which have the same basic behaviour
as the microkernel ones. Semaphore groups are not yet implemented.
- These semaphores are enhanced in that they accept an initial count and a
count limit. This allows configuring them as binary semaphores, and also
provisioning them without having to "give" the semaphore multiple times
before using them.
mutexes:
- Straight port of the microkernel mutexes. An init function is added to
allow defining them at runtime.
pipes:
- straight port
timers:
- amalgamation of nano and micro timers, with all functionalities
intact.
events:
- re-implementation, using semaphores and workqueues.
mailboxes:
- straight port
message queues:
- straight port of microkernel FIFOs
memory maps:
- straight port
workqueues:
- Basically, have all APIs follow the k_ naming rule, and use the _timeout
subsystem from the unified kernel directory, and not the _nano_timeout
one.
stacks:
- Port of the nanokernel stacks. They can now have multiple threads
pending on them and threads can wait with a timeout.
LIFOs:
- Straight port of the nanokernel LIFOs.
FIFOs:
- Straight port of the nanokernel FIFOs.
Work by: Dmitriy Korovkin <dmitriy.korovkin@windriver.com>
Peter Mitsis <peter.mitsis@windriver.com>
Allan Stephens <allan.stephens@windriver.com>
Benjamin Walsh <benjamin.walsh@windriver.com>
Change-Id: Id3cadb3694484ab2ca467889cfb029be3cd3a7d6
Signed-off-by: Benjamin Walsh <benjamin.walsh@windriver.com>
2016-09-03 00:55:39 +02:00
|
|
|
}
|
|
|
|
|
kernel/timeout: Make timeout arguments an opaque type
Add a k_timeout_t type, and use it everywhere that kernel API
functions were accepting a millisecond timeout argument. Instead of
forcing milliseconds everywhere (which are often not integrally
representable as system ticks), do the conversion to ticks at the
point where the timeout is created. This avoids an extra unit
conversion in some application code, and allows us to express the
timeout in units other than milliseconds to achieve greater precision.
The existing K_MSEC() et. al. macros now return initializers for a
k_timeout_t.
The K_NO_WAIT and K_FOREVER constants have now become k_timeout_t
values, which means they cannot be operated on as integers.
Applications which have their own APIs that need to inspect these
vs. user-provided timeouts can now use a K_TIMEOUT_EQ() predicate to
test for equality.
Timer drivers, which receive an integer tick count in ther
z_clock_set_timeout() functions, now use the integer-valued
K_TICKS_FOREVER constant instead of K_FOREVER.
For the initial release, to preserve source compatibility, a
CONFIG_LEGACY_TIMEOUT_API kconfig is provided. When true, the
k_timeout_t will remain a compatible 32 bit value that will work with
any legacy Zephyr application.
Some subsystems present timeout (or timeout-like) values to their own
users as APIs that would re-use the kernel's own constants and
conventions. These will require some minor design work to adapt to
the new scheme (in most cases just using k_timeout_t directly in their
own API), and they have not been changed in this patch, instead
selecting CONFIG_LEGACY_TIMEOUT_API via kconfig. These subsystems
include: CAN Bus, the Microbit display driver, I2S, LoRa modem
drivers, the UART Async API, Video hardware drivers, the console
subsystem, and the network buffer abstraction.
k_sleep() now takes a k_timeout_t argument, with a k_msleep() variant
provided that works identically to the original API.
Most of the changes here are just type/configuration management and
documentation, but there are logic changes in mempool, where a loop
that used a timeout numerically has been reworked using a new
z_timeout_end_calc() predicate. Also in queue.c, a (when POLL was
enabled) a similar loop was needlessly used to try to retry the
k_poll() call after a spurious failure. But k_poll() does not fail
spuriously, so the loop was removed.
Signed-off-by: Andy Ross <andrew.j.ross@intel.com>
2020-03-06 00:18:14 +01:00
|
|
|
k_timeout_t timeout;
|
|
|
|
|
|
|
|
#ifndef CONFIG_LEGACY_TIMEOUT_API
|
|
|
|
timeout = Z_TIMEOUT_TICKS(ticks);
|
|
|
|
#else
|
2019-05-08 22:22:46 +02:00
|
|
|
ticks += _TICK_ALIGN;
|
kernel/timeout: Make timeout arguments an opaque type
Add a k_timeout_t type, and use it everywhere that kernel API
functions were accepting a millisecond timeout argument. Instead of
forcing milliseconds everywhere (which are often not integrally
representable as system ticks), do the conversion to ticks at the
point where the timeout is created. This avoids an extra unit
conversion in some application code, and allows us to express the
timeout in units other than milliseconds to achieve greater precision.
The existing K_MSEC() et. al. macros now return initializers for a
k_timeout_t.
The K_NO_WAIT and K_FOREVER constants have now become k_timeout_t
values, which means they cannot be operated on as integers.
Applications which have their own APIs that need to inspect these
vs. user-provided timeouts can now use a K_TIMEOUT_EQ() predicate to
test for equality.
Timer drivers, which receive an integer tick count in ther
z_clock_set_timeout() functions, now use the integer-valued
K_TICKS_FOREVER constant instead of K_FOREVER.
For the initial release, to preserve source compatibility, a
CONFIG_LEGACY_TIMEOUT_API kconfig is provided. When true, the
k_timeout_t will remain a compatible 32 bit value that will work with
any legacy Zephyr application.
Some subsystems present timeout (or timeout-like) values to their own
users as APIs that would re-use the kernel's own constants and
conventions. These will require some minor design work to adapt to
the new scheme (in most cases just using k_timeout_t directly in their
own API), and they have not been changed in this patch, instead
selecting CONFIG_LEGACY_TIMEOUT_API via kconfig. These subsystems
include: CAN Bus, the Microbit display driver, I2S, LoRa modem
drivers, the UART Async API, Video hardware drivers, the console
subsystem, and the network buffer abstraction.
k_sleep() now takes a k_timeout_t argument, with a k_msleep() variant
provided that works identically to the original API.
Most of the changes here are just type/configuration management and
documentation, but there are logic changes in mempool, where a loop
that used a timeout numerically has been reworked using a new
z_timeout_end_calc() predicate. Also in queue.c, a (when POLL was
enabled) a similar loop was needlessly used to try to retry the
k_poll() call after a spurious failure. But k_poll() does not fail
spuriously, so the loop was removed.
Signed-off-by: Andy Ross <andrew.j.ross@intel.com>
2020-03-06 00:18:14 +01:00
|
|
|
timeout = (k_ticks_t) ticks;
|
|
|
|
#endif
|
|
|
|
|
2018-10-25 17:45:08 +02:00
|
|
|
expected_wakeup_time = ticks + z_tick_get_32();
|
2019-02-06 00:36:01 +01:00
|
|
|
|
|
|
|
/* Spinlock purely for local interrupt locking to prevent us
|
|
|
|
* from being interrupted while _current is in an intermediate
|
|
|
|
* state. Should unify this implementation with pend().
|
|
|
|
*/
|
|
|
|
struct k_spinlock local_lock = {};
|
|
|
|
k_spinlock_key_t key = k_spin_lock(&local_lock);
|
unified: initial unified kernel implementation
Summary of what this includes:
initialization:
Copy from nano_init.c, with the following changes:
- the main thread is the continuation of the init thread, but an idle
thread is created as well
- _main() initializes threads in groups and starts the EXE group
- the ready queues are initialized
- the main thread is marked as non-essential once the system init is
done
- a weak main() symbol is provided if the application does not provide a
main() function
scheduler:
Not an exhaustive list, but basically provide primitives for:
- adding/removing a thread to/from a wait queue
- adding/removing a thread to/from the ready queue
- marking thread as ready
- locking/unlocking the scheduler
- instead of locking interrupts
- getting/setting thread priority
- checking what state (coop/preempt) a thread is currenlty running in
- rescheduling threads
- finding what thread is the next to run
- yielding/sleeping/aborting sleep
- finding the current thread
threads:
- Add operationns on threads, such as creating and starting them.
standardized handling of kernel object return codes:
- Kernel objects now cause _Swap() to return the following values:
0 => operation successful
-EAGAIN => operation timed out
-Exxxxx => operation failed for another reason
- The thread's swap_data field can be used to return any additional
information required to complete the operation, such as the actual
result of a successful operation.
timeouts:
- same as nano timeouts, renamed to simply 'timeouts'
- the kernel is still tick-based, but objects take timeout values in
ms for forward compatibility with a tickless kernel.
semaphores:
- Port of the nanokernel semaphores, which have the same basic behaviour
as the microkernel ones. Semaphore groups are not yet implemented.
- These semaphores are enhanced in that they accept an initial count and a
count limit. This allows configuring them as binary semaphores, and also
provisioning them without having to "give" the semaphore multiple times
before using them.
mutexes:
- Straight port of the microkernel mutexes. An init function is added to
allow defining them at runtime.
pipes:
- straight port
timers:
- amalgamation of nano and micro timers, with all functionalities
intact.
events:
- re-implementation, using semaphores and workqueues.
mailboxes:
- straight port
message queues:
- straight port of microkernel FIFOs
memory maps:
- straight port
workqueues:
- Basically, have all APIs follow the k_ naming rule, and use the _timeout
subsystem from the unified kernel directory, and not the _nano_timeout
one.
stacks:
- Port of the nanokernel stacks. They can now have multiple threads
pending on them and threads can wait with a timeout.
LIFOs:
- Straight port of the nanokernel LIFOs.
FIFOs:
- Straight port of the nanokernel FIFOs.
Work by: Dmitriy Korovkin <dmitriy.korovkin@windriver.com>
Peter Mitsis <peter.mitsis@windriver.com>
Allan Stephens <allan.stephens@windriver.com>
Benjamin Walsh <benjamin.walsh@windriver.com>
Change-Id: Id3cadb3694484ab2ca467889cfb029be3cd3a7d6
Signed-off-by: Benjamin Walsh <benjamin.walsh@windriver.com>
2016-09-03 00:55:39 +02:00
|
|
|
|
2019-02-26 06:17:29 +01:00
|
|
|
#if defined(CONFIG_TIMESLICING) && defined(CONFIG_SWAP_NONATOMIC)
|
|
|
|
pending_current = _current;
|
|
|
|
#endif
|
2019-03-08 22:19:05 +01:00
|
|
|
z_remove_thread_from_ready_q(_current);
|
kernel/timeout: Make timeout arguments an opaque type
Add a k_timeout_t type, and use it everywhere that kernel API
functions were accepting a millisecond timeout argument. Instead of
forcing milliseconds everywhere (which are often not integrally
representable as system ticks), do the conversion to ticks at the
point where the timeout is created. This avoids an extra unit
conversion in some application code, and allows us to express the
timeout in units other than milliseconds to achieve greater precision.
The existing K_MSEC() et. al. macros now return initializers for a
k_timeout_t.
The K_NO_WAIT and K_FOREVER constants have now become k_timeout_t
values, which means they cannot be operated on as integers.
Applications which have their own APIs that need to inspect these
vs. user-provided timeouts can now use a K_TIMEOUT_EQ() predicate to
test for equality.
Timer drivers, which receive an integer tick count in ther
z_clock_set_timeout() functions, now use the integer-valued
K_TICKS_FOREVER constant instead of K_FOREVER.
For the initial release, to preserve source compatibility, a
CONFIG_LEGACY_TIMEOUT_API kconfig is provided. When true, the
k_timeout_t will remain a compatible 32 bit value that will work with
any legacy Zephyr application.
Some subsystems present timeout (or timeout-like) values to their own
users as APIs that would re-use the kernel's own constants and
conventions. These will require some minor design work to adapt to
the new scheme (in most cases just using k_timeout_t directly in their
own API), and they have not been changed in this patch, instead
selecting CONFIG_LEGACY_TIMEOUT_API via kconfig. These subsystems
include: CAN Bus, the Microbit display driver, I2S, LoRa modem
drivers, the UART Async API, Video hardware drivers, the console
subsystem, and the network buffer abstraction.
k_sleep() now takes a k_timeout_t argument, with a k_msleep() variant
provided that works identically to the original API.
Most of the changes here are just type/configuration management and
documentation, but there are logic changes in mempool, where a loop
that used a timeout numerically has been reworked using a new
z_timeout_end_calc() predicate. Also in queue.c, a (when POLL was
enabled) a similar loop was needlessly used to try to retry the
k_poll() call after a spurious failure. But k_poll() does not fail
spuriously, so the loop was removed.
Signed-off-by: Andy Ross <andrew.j.ross@intel.com>
2020-03-06 00:18:14 +01:00
|
|
|
z_add_thread_timeout(_current, timeout);
|
2019-03-22 18:30:19 +01:00
|
|
|
z_mark_thread_as_suspended(_current);
|
unified: initial unified kernel implementation
Summary of what this includes:
initialization:
Copy from nano_init.c, with the following changes:
- the main thread is the continuation of the init thread, but an idle
thread is created as well
- _main() initializes threads in groups and starts the EXE group
- the ready queues are initialized
- the main thread is marked as non-essential once the system init is
done
- a weak main() symbol is provided if the application does not provide a
main() function
scheduler:
Not an exhaustive list, but basically provide primitives for:
- adding/removing a thread to/from a wait queue
- adding/removing a thread to/from the ready queue
- marking thread as ready
- locking/unlocking the scheduler
- instead of locking interrupts
- getting/setting thread priority
- checking what state (coop/preempt) a thread is currenlty running in
- rescheduling threads
- finding what thread is the next to run
- yielding/sleeping/aborting sleep
- finding the current thread
threads:
- Add operationns on threads, such as creating and starting them.
standardized handling of kernel object return codes:
- Kernel objects now cause _Swap() to return the following values:
0 => operation successful
-EAGAIN => operation timed out
-Exxxxx => operation failed for another reason
- The thread's swap_data field can be used to return any additional
information required to complete the operation, such as the actual
result of a successful operation.
timeouts:
- same as nano timeouts, renamed to simply 'timeouts'
- the kernel is still tick-based, but objects take timeout values in
ms for forward compatibility with a tickless kernel.
semaphores:
- Port of the nanokernel semaphores, which have the same basic behaviour
as the microkernel ones. Semaphore groups are not yet implemented.
- These semaphores are enhanced in that they accept an initial count and a
count limit. This allows configuring them as binary semaphores, and also
provisioning them without having to "give" the semaphore multiple times
before using them.
mutexes:
- Straight port of the microkernel mutexes. An init function is added to
allow defining them at runtime.
pipes:
- straight port
timers:
- amalgamation of nano and micro timers, with all functionalities
intact.
events:
- re-implementation, using semaphores and workqueues.
mailboxes:
- straight port
message queues:
- straight port of microkernel FIFOs
memory maps:
- straight port
workqueues:
- Basically, have all APIs follow the k_ naming rule, and use the _timeout
subsystem from the unified kernel directory, and not the _nano_timeout
one.
stacks:
- Port of the nanokernel stacks. They can now have multiple threads
pending on them and threads can wait with a timeout.
LIFOs:
- Straight port of the nanokernel LIFOs.
FIFOs:
- Straight port of the nanokernel FIFOs.
Work by: Dmitriy Korovkin <dmitriy.korovkin@windriver.com>
Peter Mitsis <peter.mitsis@windriver.com>
Allan Stephens <allan.stephens@windriver.com>
Benjamin Walsh <benjamin.walsh@windriver.com>
Change-Id: Id3cadb3694484ab2ca467889cfb029be3cd3a7d6
Signed-off-by: Benjamin Walsh <benjamin.walsh@windriver.com>
2016-09-03 00:55:39 +02:00
|
|
|
|
2019-03-08 22:19:05 +01:00
|
|
|
(void)z_swap(&local_lock, key);
|
2018-10-25 17:45:08 +02:00
|
|
|
|
2019-03-22 18:30:19 +01:00
|
|
|
__ASSERT(!z_is_thread_state_set(_current, _THREAD_SUSPENDED), "");
|
|
|
|
|
2018-10-25 17:45:08 +02:00
|
|
|
ticks = expected_wakeup_time - z_tick_get_32();
|
|
|
|
if (ticks > 0) {
|
2019-05-08 22:22:46 +02:00
|
|
|
return ticks;
|
2018-10-25 17:45:08 +02:00
|
|
|
}
|
2016-12-14 21:24:12 +01:00
|
|
|
#endif
|
2018-10-25 17:45:08 +02:00
|
|
|
|
|
|
|
return 0;
|
unified: initial unified kernel implementation
Summary of what this includes:
initialization:
Copy from nano_init.c, with the following changes:
- the main thread is the continuation of the init thread, but an idle
thread is created as well
- _main() initializes threads in groups and starts the EXE group
- the ready queues are initialized
- the main thread is marked as non-essential once the system init is
done
- a weak main() symbol is provided if the application does not provide a
main() function
scheduler:
Not an exhaustive list, but basically provide primitives for:
- adding/removing a thread to/from a wait queue
- adding/removing a thread to/from the ready queue
- marking thread as ready
- locking/unlocking the scheduler
- instead of locking interrupts
- getting/setting thread priority
- checking what state (coop/preempt) a thread is currenlty running in
- rescheduling threads
- finding what thread is the next to run
- yielding/sleeping/aborting sleep
- finding the current thread
threads:
- Add operationns on threads, such as creating and starting them.
standardized handling of kernel object return codes:
- Kernel objects now cause _Swap() to return the following values:
0 => operation successful
-EAGAIN => operation timed out
-Exxxxx => operation failed for another reason
- The thread's swap_data field can be used to return any additional
information required to complete the operation, such as the actual
result of a successful operation.
timeouts:
- same as nano timeouts, renamed to simply 'timeouts'
- the kernel is still tick-based, but objects take timeout values in
ms for forward compatibility with a tickless kernel.
semaphores:
- Port of the nanokernel semaphores, which have the same basic behaviour
as the microkernel ones. Semaphore groups are not yet implemented.
- These semaphores are enhanced in that they accept an initial count and a
count limit. This allows configuring them as binary semaphores, and also
provisioning them without having to "give" the semaphore multiple times
before using them.
mutexes:
- Straight port of the microkernel mutexes. An init function is added to
allow defining them at runtime.
pipes:
- straight port
timers:
- amalgamation of nano and micro timers, with all functionalities
intact.
events:
- re-implementation, using semaphores and workqueues.
mailboxes:
- straight port
message queues:
- straight port of microkernel FIFOs
memory maps:
- straight port
workqueues:
- Basically, have all APIs follow the k_ naming rule, and use the _timeout
subsystem from the unified kernel directory, and not the _nano_timeout
one.
stacks:
- Port of the nanokernel stacks. They can now have multiple threads
pending on them and threads can wait with a timeout.
LIFOs:
- Straight port of the nanokernel LIFOs.
FIFOs:
- Straight port of the nanokernel FIFOs.
Work by: Dmitriy Korovkin <dmitriy.korovkin@windriver.com>
Peter Mitsis <peter.mitsis@windriver.com>
Allan Stephens <allan.stephens@windriver.com>
Benjamin Walsh <benjamin.walsh@windriver.com>
Change-Id: Id3cadb3694484ab2ca467889cfb029be3cd3a7d6
Signed-off-by: Benjamin Walsh <benjamin.walsh@windriver.com>
2016-09-03 00:55:39 +02:00
|
|
|
}
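
/*
 * Illustrative sketch only (not part of the original file): how the
 * public sleep APIs below are expected to feed z_tick_sleep(). The
 * caller converts its time unit to ticks first, and interprets the
 * return value as the number of ticks still pending when the thread
 * was woken early. The EXAMPLE_TICK_SLEEP_SKETCH guard is hypothetical
 * and never defined, so this block is not built.
 */
#ifdef EXAMPLE_TICK_SLEEP_SKETCH
static int32_t example_sleep_ms(uint32_t ms)
{
	/* Round up so the thread never sleeps for less than requested. */
	k_ticks_t ticks = k_ms_to_ticks_ceil32(ms);

	return z_tick_sleep(ticks);
}
#endif /* EXAMPLE_TICK_SLEEP_SKETCH */
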
int32_t z_impl_k_sleep(k_timeout_t timeout)
{
	k_ticks_t ticks;

	__ASSERT(!arch_is_in_isr(), "");
	sys_trace_void(SYS_TRACE_ID_SLEEP);

	if (K_TIMEOUT_EQ(timeout, K_FOREVER)) {
		k_thread_suspend(_current);
		return (int32_t) K_TICKS_FOREVER;
	}

#ifdef CONFIG_LEGACY_TIMEOUT_API
	ticks = k_ms_to_ticks_ceil32(timeout);
#else
	ticks = timeout.ticks;
#endif

	ticks = z_tick_sleep(ticks);
	sys_trace_end_call(SYS_TRACE_ID_SLEEP);
	return k_ticks_to_ms_floor64(ticks);
}

#ifdef CONFIG_USERSPACE
static inline int32_t z_vrfy_k_sleep(k_timeout_t timeout)
{
	return z_impl_k_sleep(timeout);
}
#include <syscalls/k_sleep_mrsh.c>
#endif
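
/*
 * Hedged usage sketch (illustration, not part of this file): an
 * application thread sleeping via the k_timeout_t API shown above.
 * k_sleep() returns the time remaining, in milliseconds, if the thread
 * was woken early with k_wakeup(). The EXAMPLE_K_SLEEP_SKETCH guard is
 * hypothetical, so the sketch is not compiled into the kernel.
 */
#ifdef EXAMPLE_K_SLEEP_SKETCH
static void example_sleeper(void)
{
	/* K_MSEC(100) builds a k_timeout_t initializer for 100 ms. */
	int32_t remaining_ms = k_sleep(K_MSEC(100));

	if (remaining_ms > 0) {
		/* Another thread called k_wakeup() before the timeout. */
	}
}
#endif /* EXAMPLE_K_SLEEP_SKETCH */
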
int32_t z_impl_k_usleep(int us)
{
	int32_t ticks;

	ticks = k_us_to_ticks_ceil64(us);
	ticks = z_tick_sleep(ticks);
	return k_ticks_to_us_floor64(ticks);
}

#ifdef CONFIG_USERSPACE
static inline int32_t z_vrfy_k_usleep(int us)
{
	return z_impl_k_usleep(us);
}
#include <syscalls/k_usleep_mrsh.c>
#endif
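
/*
 * Hedged usage sketch (illustration only): k_usleep() mirrors k_sleep()
 * but takes plain microseconds and returns the microseconds remaining
 * if woken early, as implemented above. The EXAMPLE_K_USLEEP_SKETCH
 * guard is hypothetical and never defined.
 */
#ifdef EXAMPLE_K_USLEEP_SKETCH
static void example_short_delay(void)
{
	/* Sleep for roughly 500 microseconds (rounded up to whole ticks). */
	int32_t remaining_us = k_usleep(500);

	ARG_UNUSED(remaining_us);
}
#endif /* EXAMPLE_K_USLEEP_SKETCH */
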
void z_impl_k_wakeup(k_tid_t thread)
|
unified: initial unified kernel implementation
Summary of what this includes:
initialization:
Copy from nano_init.c, with the following changes:
- the main thread is the continuation of the init thread, but an idle
thread is created as well
- _main() initializes threads in groups and starts the EXE group
- the ready queues are initialized
- the main thread is marked as non-essential once the system init is
done
- a weak main() symbol is provided if the application does not provide a
main() function
scheduler:
Not an exhaustive list, but basically provide primitives for:
- adding/removing a thread to/from a wait queue
- adding/removing a thread to/from the ready queue
- marking thread as ready
- locking/unlocking the scheduler
- instead of locking interrupts
- getting/setting thread priority
- checking what state (coop/preempt) a thread is currenlty running in
- rescheduling threads
- finding what thread is the next to run
- yielding/sleeping/aborting sleep
- finding the current thread
threads:
- Add operationns on threads, such as creating and starting them.
standardized handling of kernel object return codes:
- Kernel objects now cause _Swap() to return the following values:
0 => operation successful
-EAGAIN => operation timed out
-Exxxxx => operation failed for another reason
- The thread's swap_data field can be used to return any additional
information required to complete the operation, such as the actual
result of a successful operation.
timeouts:
- same as nano timeouts, renamed to simply 'timeouts'
- the kernel is still tick-based, but objects take timeout values in
ms for forward compatibility with a tickless kernel.
semaphores:
- Port of the nanokernel semaphores, which have the same basic behaviour
as the microkernel ones. Semaphore groups are not yet implemented.
- These semaphores are enhanced in that they accept an initial count and a
count limit. This allows configuring them as binary semaphores, and also
provisioning them without having to "give" the semaphore multiple times
before using them.
mutexes:
- Straight port of the microkernel mutexes. An init function is added to
allow defining them at runtime.
pipes:
- straight port
timers:
- amalgamation of nano and micro timers, with all functionalities
intact.
events:
- re-implementation, using semaphores and workqueues.
mailboxes:
- straight port
message queues:
- straight port of microkernel FIFOs
memory maps:
- straight port
workqueues:
- Basically, have all APIs follow the k_ naming rule, and use the _timeout
subsystem from the unified kernel directory, and not the _nano_timeout
one.
stacks:
- Port of the nanokernel stacks. They can now have multiple threads
pending on them and threads can wait with a timeout.
LIFOs:
- Straight port of the nanokernel LIFOs.
FIFOs:
- Straight port of the nanokernel FIFOs.
Work by: Dmitriy Korovkin <dmitriy.korovkin@windriver.com>
Peter Mitsis <peter.mitsis@windriver.com>
Allan Stephens <allan.stephens@windriver.com>
Benjamin Walsh <benjamin.walsh@windriver.com>
Change-Id: Id3cadb3694484ab2ca467889cfb029be3cd3a7d6
Signed-off-by: Benjamin Walsh <benjamin.walsh@windriver.com>
2016-09-03 00:55:39 +02:00
|
|
|
{
|
2019-03-08 22:19:05 +01:00
|
|
|
if (z_is_thread_pending(thread)) {
|
unified: initial unified kernel implementation
Summary of what this includes:
initialization:
Copy from nano_init.c, with the following changes:
- the main thread is the continuation of the init thread, but an idle
thread is created as well
- _main() initializes threads in groups and starts the EXE group
- the ready queues are initialized
- the main thread is marked as non-essential once the system init is
done
- a weak main() symbol is provided if the application does not provide a
main() function
scheduler:
Not an exhaustive list, but basically provide primitives for:
- adding/removing a thread to/from a wait queue
- adding/removing a thread to/from the ready queue
- marking thread as ready
- locking/unlocking the scheduler
- instead of locking interrupts
- getting/setting thread priority
- checking what state (coop/preempt) a thread is currenlty running in
- rescheduling threads
- finding what thread is the next to run
- yielding/sleeping/aborting sleep
- finding the current thread
threads:
- Add operationns on threads, such as creating and starting them.
standardized handling of kernel object return codes:
- Kernel objects now cause _Swap() to return the following values:
0 => operation successful
-EAGAIN => operation timed out
-Exxxxx => operation failed for another reason
- The thread's swap_data field can be used to return any additional
information required to complete the operation, such as the actual
result of a successful operation.
timeouts:
- same as nano timeouts, renamed to simply 'timeouts'
- the kernel is still tick-based, but objects take timeout values in
ms for forward compatibility with a tickless kernel.
semaphores:
- Port of the nanokernel semaphores, which have the same basic behaviour
as the microkernel ones. Semaphore groups are not yet implemented.
- These semaphores are enhanced in that they accept an initial count and a
count limit. This allows configuring them as binary semaphores, and also
provisioning them without having to "give" the semaphore multiple times
before using them.
mutexes:
- Straight port of the microkernel mutexes. An init function is added to
allow defining them at runtime.
pipes:
- straight port
timers:
- amalgamation of nano and micro timers, with all functionalities
intact.
events:
- re-implementation, using semaphores and workqueues.
mailboxes:
- straight port
message queues:
- straight port of microkernel FIFOs
memory maps:
- straight port
workqueues:
- Basically, have all APIs follow the k_ naming rule, and use the _timeout
subsystem from the unified kernel directory, and not the _nano_timeout
one.
stacks:
- Port of the nanokernel stacks. They can now have multiple threads
pending on them and threads can wait with a timeout.
LIFOs:
- Straight port of the nanokernel LIFOs.
FIFOs:
- Straight port of the nanokernel FIFOs.
Work by: Dmitriy Korovkin <dmitriy.korovkin@windriver.com>
Peter Mitsis <peter.mitsis@windriver.com>
Allan Stephens <allan.stephens@windriver.com>
Benjamin Walsh <benjamin.walsh@windriver.com>
Change-Id: Id3cadb3694484ab2ca467889cfb029be3cd3a7d6
Signed-off-by: Benjamin Walsh <benjamin.walsh@windriver.com>
2016-09-03 00:55:39 +02:00
|
|
|
return;
|
|
|
|
}
|
|
|
|
|
2019-03-08 22:19:05 +01:00
|
|
|
if (z_abort_thread_timeout(thread) < 0) {
|
2019-11-08 19:44:22 +01:00
|
|
|
/* Might have just been sleeping forever */
|
|
|
|
if (thread->base.thread_state != _THREAD_SUSPENDED) {
|
|
|
|
return;
|
|
|
|
}
|
unified: initial unified kernel implementation
Summary of what this includes:
initialization:
Copy from nano_init.c, with the following changes:
- the main thread is the continuation of the init thread, but an idle
thread is created as well
- _main() initializes threads in groups and starts the EXE group
- the ready queues are initialized
- the main thread is marked as non-essential once the system init is
done
- a weak main() symbol is provided if the application does not provide a
main() function
scheduler:
Not an exhaustive list, but basically provide primitives for:
- adding/removing a thread to/from a wait queue
- adding/removing a thread to/from the ready queue
- marking thread as ready
- locking/unlocking the scheduler
- instead of locking interrupts
- getting/setting thread priority
- checking what state (coop/preempt) a thread is currenlty running in
- rescheduling threads
- finding what thread is the next to run
- yielding/sleeping/aborting sleep
- finding the current thread
threads:
- Add operationns on threads, such as creating and starting them.
standardized handling of kernel object return codes:
- Kernel objects now cause _Swap() to return the following values:
0 => operation successful
-EAGAIN => operation timed out
-Exxxxx => operation failed for another reason
- The thread's swap_data field can be used to return any additional
information required to complete the operation, such as the actual
result of a successful operation.
timeouts:
- same as nano timeouts, renamed to simply 'timeouts'
- the kernel is still tick-based, but objects take timeout values in
ms for forward compatibility with a tickless kernel.
semaphores:
- Port of the nanokernel semaphores, which have the same basic behaviour
as the microkernel ones. Semaphore groups are not yet implemented.
- These semaphores are enhanced in that they accept an initial count and a
count limit. This allows configuring them as binary semaphores, and also
provisioning them without having to "give" the semaphore multiple times
before using them.
mutexes:
- Straight port of the microkernel mutexes. An init function is added to
allow defining them at runtime.
pipes:
- straight port
timers:
- amalgamation of nano and micro timers, with all functionalities
intact.
events:
- re-implementation, using semaphores and workqueues.
mailboxes:
- straight port
message queues:
- straight port of microkernel FIFOs
memory maps:
- straight port
workqueues:
- Basically, have all APIs follow the k_ naming rule, and use the _timeout
subsystem from the unified kernel directory, and not the _nano_timeout
one.
stacks:
- Port of the nanokernel stacks. They can now have multiple threads
pending on them and threads can wait with a timeout.
LIFOs:
- Straight port of the nanokernel LIFOs.
FIFOs:
- Straight port of the nanokernel FIFOs.
Work by: Dmitriy Korovkin <dmitriy.korovkin@windriver.com>
Peter Mitsis <peter.mitsis@windriver.com>
Allan Stephens <allan.stephens@windriver.com>
Benjamin Walsh <benjamin.walsh@windriver.com>
Change-Id: Id3cadb3694484ab2ca467889cfb029be3cd3a7d6
Signed-off-by: Benjamin Walsh <benjamin.walsh@windriver.com>
2016-09-03 00:55:39 +02:00
|
|
|
}
|
|
|
|
|
2019-03-22 18:30:19 +01:00
|
|
|
z_mark_thread_as_not_suspended(thread);
|
2019-03-08 22:19:05 +01:00
|
|
|
z_ready_thread(thread);
|
unified: initial unified kernel implementation
Summary of what this includes:
initialization:
Copy from nano_init.c, with the following changes:
- the main thread is the continuation of the init thread, but an idle
thread is created as well
- _main() initializes threads in groups and starts the EXE group
- the ready queues are initialized
- the main thread is marked as non-essential once the system init is
done
- a weak main() symbol is provided if the application does not provide a
main() function
scheduler:
Not an exhaustive list, but basically provide primitives for:
- adding/removing a thread to/from a wait queue
- adding/removing a thread to/from the ready queue
- marking thread as ready
- locking/unlocking the scheduler
- instead of locking interrupts
- getting/setting thread priority
- checking what state (coop/preempt) a thread is currenlty running in
- rescheduling threads
- finding what thread is the next to run
- yielding/sleeping/aborting sleep
- finding the current thread
threads:
- Add operationns on threads, such as creating and starting them.
standardized handling of kernel object return codes:
- Kernel objects now cause _Swap() to return the following values:
0 => operation successful
-EAGAIN => operation timed out
-Exxxxx => operation failed for another reason
- The thread's swap_data field can be used to return any additional
information required to complete the operation, such as the actual
result of a successful operation.
timeouts:
- same as nano timeouts, renamed to simply 'timeouts'
- the kernel is still tick-based, but objects take timeout values in
ms for forward compatibility with a tickless kernel.
semaphores:
- Port of the nanokernel semaphores, which have the same basic behaviour
as the microkernel ones. Semaphore groups are not yet implemented.
- These semaphores are enhanced in that they accept an initial count and a
count limit. This allows configuring them as binary semaphores, and also
provisioning them without having to "give" the semaphore multiple times
before using them.
mutexes:
- Straight port of the microkernel mutexes. An init function is added to
allow defining them at runtime.
pipes:
- straight port
timers:
- amalgamation of nano and micro timers, with all functionalities
intact.
events:
- re-implementation, using semaphores and workqueues.
mailboxes:
- straight port
message queues:
- straight port of microkernel FIFOs
memory maps:
- straight port
workqueues:
- Basically, have all APIs follow the k_ naming rule, and use the _timeout
subsystem from the unified kernel directory, and not the _nano_timeout
one.
stacks:
- Port of the nanokernel stacks. They can now have multiple threads
pending on them and threads can wait with a timeout.
LIFOs:
- Straight port of the nanokernel LIFOs.
FIFOs:
- Straight port of the nanokernel FIFOs.
Work by: Dmitriy Korovkin <dmitriy.korovkin@windriver.com>
Peter Mitsis <peter.mitsis@windriver.com>
Allan Stephens <allan.stephens@windriver.com>
Benjamin Walsh <benjamin.walsh@windriver.com>
Change-Id: Id3cadb3694484ab2ca467889cfb029be3cd3a7d6
Signed-off-by: Benjamin Walsh <benjamin.walsh@windriver.com>
2016-09-03 00:55:39 +02:00
|
|
|
|
2020-02-04 22:52:09 +01:00
|
|
|
#if defined(CONFIG_SMP) && defined(CONFIG_SCHED_IPI_SUPPORTED)
|
|
|
|
arch_sched_ipi();
|
|
|
|
#endif
|
|
|
|
|
2019-11-07 21:43:29 +01:00
|
|
|
if (!arch_is_in_isr()) {
|
2019-03-08 22:19:05 +01:00
|
|
|
z_reschedule_unlocked();
|
	}
}

#ifdef CONFIG_TRACE_SCHED_IPI
extern void z_trace_sched_ipi(void);
#endif

#ifdef CONFIG_SMP
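/* Presumably invoked from each architecture's IPI interrupt handler in
 * response to arch_sched_ipi() (an assumption; the arch glue is not part of
 * this file). The IPI only needs to interrupt the target CPU so it re-enters
 * the scheduler on its own; the optional tracing hook below is the only work
 * done here.
 */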
void z_sched_ipi(void)
{
	/* NOTE: When adding code to this, make sure this is called
	 * at appropriate location when !CONFIG_SCHED_IPI_SUPPORTED.
	 */
#ifdef CONFIG_TRACE_SCHED_IPI
	z_trace_sched_ipi();
#endif
}

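/* Reached from the thread abort path under SMP when the victim thread may
 * still be executing on another CPU: flag it as aborting, nudge the other
 * CPUs with an IPI where supported, then poll until some CPU (possibly this
 * one) marks it dead. See the inline comments below for the details.
 */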
void z_sched_abort(struct k_thread *thread)
{
	k_spinlock_key_t key;

	if (thread == _current) {
		z_remove_thread_from_ready_q(thread);
		return;
	}

	/* First broadcast an IPI to the other CPUs so they can stop
	 * it locally. Not all architectures support that, alas. If
	 * we don't have it, we need to wait for some other interrupt.
	 */
	key = k_spin_lock(&sched_spinlock);
	thread->base.thread_state |= _THREAD_ABORTING;
	k_spin_unlock(&sched_spinlock, key);
#ifdef CONFIG_SCHED_IPI_SUPPORTED
	arch_sched_ipi();
#endif

	/* Wait for it to be flagged dead either by the CPU it was
	 * running on or because we caught it idle in the queue
	 */
	while ((thread->base.thread_state & _THREAD_DEAD) == 0U) {
		key = k_spin_lock(&sched_spinlock);
		if (z_is_thread_prevented_from_running(thread)) {
			__ASSERT(!z_is_thread_queued(thread), "");
			thread->base.thread_state |= _THREAD_DEAD;
			k_spin_unlock(&sched_spinlock, key);
		} else if (z_is_thread_queued(thread)) {
			_priq_run_remove(&_kernel.ready_q.runq, thread);
			z_mark_thread_as_not_queued(thread);
			thread->base.thread_state |= _THREAD_DEAD;
			k_spin_unlock(&sched_spinlock, key);
		} else {
			k_spin_unlock(&sched_spinlock, key);
			k_busy_wait(100);
		}
	}

	/* If the thread self-aborted (e.g. its own exit raced with
	 * this external abort) then even though it is flagged DEAD,
	 * it's still running until its next swap and thus the thread
	 * object is still in use. We have to resort to a fallback
	 * delay in that circumstance.
	 */
	if ((thread->base.thread_state & _THREAD_ABORTED_IN_ISR) == 0U) {
		k_busy_wait(THREAD_ABORT_DELAY_US);
	}
}
#endif

#ifdef CONFIG_USERSPACE
static inline void z_vrfy_k_wakeup(k_tid_t thread)
{
	Z_OOPS(Z_SYSCALL_OBJ(thread, K_OBJ_THREAD));
	z_impl_k_wakeup(thread);
}
#include <syscalls/k_wakeup_mrsh.c>
#endif

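/* Illustrative usage, not part of the original file: prematurely waking a
 * sleeping worker from application code, where `worker_tid` is a hypothetical
 * thread id returned by k_thread_create():
 *
 *	k_wakeup(worker_tid);
 *
 * This cuts the worker's pending k_sleep() short and makes it runnable again.
 */
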
k_tid_t z_impl_k_current_get(void)
{
#ifdef CONFIG_SMP
	/* In SMP, _current is a field read from _current_cpu, which
	 * can race with preemption before it is read. We must lock
	 * local interrupts when reading it.
	 */
	unsigned int k = arch_irq_lock();
#endif

	k_tid_t ret = _current_cpu->current;

#ifdef CONFIG_SMP
	arch_irq_unlock(k);
#endif
	return ret;
}

#ifdef CONFIG_USERSPACE
static inline k_tid_t z_vrfy_k_current_get(void)
{
	return z_impl_k_current_get();
}
#include <syscalls/k_current_get_mrsh.c>
#endif

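/* Illustrative usage, not part of the original file: a thread querying its
 * own identity, e.g. to tag diagnostics:
 *
 *	k_tid_t me = k_current_get();
 *	printk("running in thread %p\n", me);
 */
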
int z_impl_k_is_preempt_thread(void)
{
	return !arch_is_in_isr() && is_preempt(_current);
}

#ifdef CONFIG_USERSPACE
static inline int z_vrfy_k_is_preempt_thread(void)
{
	return z_impl_k_is_preempt_thread();
}
#include <syscalls/k_is_preempt_thread_mrsh.c>
#endif

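/* Illustrative usage, not part of the original file:
 *
 *	int preemptible = k_is_preempt_thread();
 *
 * which evaluates to 0 when called from an ISR or from a cooperative-priority
 * thread, and 1 otherwise.
 */
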
#ifdef CONFIG_SCHED_CPU_MASK
# ifdef CONFIG_SMP
/* Right now we use a single byte for this mask */
BUILD_ASSERT(CONFIG_MP_NUM_CPUS <= 8, "Too many CPUs for mask word");
# endif

static int cpu_mask_mod(k_tid_t thread, uint32_t enable_mask, uint32_t disable_mask)
{
	int ret = 0;

	LOCKED(&sched_spinlock) {
		if (z_is_thread_prevented_from_running(thread)) {
			thread->base.cpu_mask |= enable_mask;
			thread->base.cpu_mask &= ~disable_mask;
		} else {
			ret = -EINVAL;
		}
	}
	return ret;
}

int k_thread_cpu_mask_clear(k_tid_t thread)
{
	return cpu_mask_mod(thread, 0, 0xffffffff);
}

int k_thread_cpu_mask_enable_all(k_tid_t thread)
{
	return cpu_mask_mod(thread, 0xffffffff, 0);
}

int k_thread_cpu_mask_enable(k_tid_t thread, int cpu)
{
	return cpu_mask_mod(thread, BIT(cpu), 0);
}

int k_thread_cpu_mask_disable(k_tid_t thread, int cpu)
{
	return cpu_mask_mod(thread, 0, BIT(cpu));
}

#endif /* CONFIG_SCHED_CPU_MASK */
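
/* Illustrative usage, not part of the original file: pinning a thread to
 * CPU 0. The mask can only be changed while the thread is not runnable
 * (cpu_mask_mod() returns -EINVAL otherwise), so a typical pattern is to
 * create the thread with a K_FOREVER start delay, adjust the mask, then
 * start it. `my_thread`, `my_stack` and `worker_fn` are hypothetical.
 *
 *	k_tid_t tid = k_thread_create(&my_thread, my_stack,
 *				      K_THREAD_STACK_SIZEOF(my_stack),
 *				      worker_fn, NULL, NULL, NULL,
 *				      K_PRIO_PREEMPT(5), 0, K_FOREVER);
 *	k_thread_cpu_mask_clear(tid);
 *	k_thread_cpu_mask_enable(tid, 0);
 *	k_thread_start(tid);
 */
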
int z_impl_k_thread_join(struct k_thread *thread, k_timeout_t timeout)
{
	k_spinlock_key_t key;
	int ret;

	__ASSERT(((arch_is_in_isr() == false) ||
		  K_TIMEOUT_EQ(timeout, K_NO_WAIT)), "");

	key = k_spin_lock(&sched_spinlock);

	if ((thread->base.pended_on == &_current->base.join_waiters) ||
	    (thread == _current)) {
		ret = -EDEADLK;
		goto out;
	}

	if ((thread->base.thread_state & _THREAD_DEAD) != 0) {
		ret = 0;
		goto out;
	}

	if (K_TIMEOUT_EQ(timeout, K_NO_WAIT)) {
		ret = -EBUSY;
		goto out;
	}

#if defined(CONFIG_TIMESLICING) && defined(CONFIG_SWAP_NONATOMIC)
	pending_current = _current;
#endif
	add_to_waitq_locked(_current, &thread->base.join_waiters);
	add_thread_timeout(_current, timeout);

	return z_swap(&sched_spinlock, key);
out:
	k_spin_unlock(&sched_spinlock, key);
	return ret;
}

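/* Illustrative usage, not part of the original file: waiting up to 100 ms for
 * a worker to terminate, where `worker_tid` is a hypothetical thread id:
 *
 *	int rc = k_thread_join(worker_tid, K_MSEC(100));
 *
 * rc is 0 once the worker has exited, -EAGAIN if the wait timed out, -EBUSY
 * for a K_NO_WAIT poll of a still-live thread, and -EDEADLK for a self join
 * or a mutual join.
 */
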
#ifdef CONFIG_USERSPACE
/* Special case: don't oops if the thread is uninitialized. This is because
 * the initialization bit does double-duty for thread objects; if false, it
 * means the thread object is truly uninitialized, or the thread ran and
 * exited for some reason.
 *
 * Return true in this case indicating we should just do nothing and return
 * success to the caller.
 */
static bool thread_obj_validate(struct k_thread *thread)
{
	struct z_object *ko = z_object_find(thread);
	int ret = z_object_validate(ko, K_OBJ_THREAD, _OBJ_INIT_TRUE);

	switch (ret) {
	case 0:
		return false;
	case -EINVAL:
		return true;
	default:
#ifdef CONFIG_LOG
		z_dump_object_error(ret, thread, ko, K_OBJ_THREAD);
#endif
		Z_OOPS(Z_SYSCALL_VERIFY_MSG(ret, "access denied"));
	}
	CODE_UNREACHABLE;
}

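/* Consequence of the special case above: a user-mode k_thread_join() or
 * k_thread_abort() on a thread object that was never initialized, or whose
 * thread already ran and exited, silently succeeds instead of oopsing.
 */
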
static inline int z_vrfy_k_thread_join(struct k_thread *thread,
				       k_timeout_t timeout)
{
	if (thread_obj_validate(thread)) {
		return 0;
	}

	return z_impl_k_thread_join(thread, timeout);
}
#include <syscalls/k_thread_join_mrsh.c>

static inline void z_vrfy_k_thread_abort(k_tid_t thread)
{
	if (thread_obj_validate(thread)) {
		return;
	}

	Z_OOPS(Z_SYSCALL_VERIFY_MSG(!(thread->base.user_options & K_ESSENTIAL),
				    "aborting essential thread %p", thread));

	z_impl_k_thread_abort((struct k_thread *)thread);
}
#include <syscalls/k_thread_abort_mrsh.c>
#endif /* CONFIG_USERSPACE */