Device PM runtime was using a semaphore to protect its critical section,
but the enable/disable functions were waiting on that semaphore. So, just
replace it with a spin lock.
Signed-off-by: Flavio Ceolin <flavio.ceolin@intel.com>
The sync API was using k_poll_signal, and in certain conditions it was
possible for multiple threads to wait on the same signal, leading to
undefined behavior.
Signed-off-by: Flavio Ceolin <flavio.ceolin@intel.com>
This uses bitarrays for allocating and deallocating virtual
addresses with k_mem_map() and k_mem_unmap(). This will
allow us to reuse virtual addresses.
Fixes #28900
Signed-off-by: Daniel Leung <daniel.leung@intel.com>
This adds a new function prototype for arch_page_phys_get()
which will be used to translate mapped virtual addresses back
to physical memory addresses. This is needed for the future
k_mem_unmap() function which requires this to find
the corresponding page frame. It is faster to look through
the page tables instead of doing linear search of the page
frame array.
A weak function is provided in case arch_page_phys_get()
is not implemented at the arch level. It simply goes
through all the page frames and finds the one which is
mapped to the virtual address.
Signed-off-by: Daniel Leung <daniel.leung@intel.com>
Once we start allowing memory regions to be unmapped, there is no
exact way to know whether k_mem_map() was called with the guard page
option specified or not. So just unconditionally enable guard pages on
both sides of the memory region to hopefully catch access
violations.
Signed-off-by: Daniel Leung <daniel.leung@intel.com>
This provides a counterpart to z_phys_map() which can be used
to temporarily map a memory region during the boot process, and
subsequently discard the mapping.
Signed-off-by: Daniel Leung <daniel.leung@intel.com>
work_timeout() is a function; a statement like "(void)work_timeout;"
has no effect.
Signed-off-by: Guennadi Liakhovetski <guennadi.liakhovetski@linux.intel.com>
Remove the config BOOT_TIME_MEASUREMENT and the corresponding #ifdef'd
code throughout (kernel/init.c, idle.c, core/common.S, reset.S, ...) which
held the extern hooks for z_timestamp_main and z_timestamp_idle in the
removed boot_time test suite.
Signed-off-by: Jennifer Williams <jennifer.m.williams@intel.com>
smp.c only has to be built if CONFIG_SMP is enabled. Remove
preprocessor checks from the file itself and update cmake rules
instead.
Signed-off-by: Guennadi Liakhovetski <guennadi.liakhovetski@linux.intel.com>
Usually Zephyr boots all secondary CPUs as a part of system
boot. Some applications, however, need the ability to boot on
the main CPU only and enable secondary CPUs selectively at
run-time. Add a Kconfig option to support this behaviour.
When booting CPUs on demand, applications also need helpers
to initialise a dummy thread and begin threaded execution
on those CPUs; add two such helpers.
Signed-off-by: Guennadi Liakhovetski <guennadi.liakhovetski@linux.intel.com>
Avoid pulling in files which use the scheduler. By explicitly avoiding
inclusion of RTOS-specific files we ensure that they are not pulled in
accidentally.
Signed-off-by: Krzysztof Chruscinski <krzysztof.chruscinski@nordicsemi.no>
Ensure that k_heap does not attempt to block the thread when a
timeout is set and space cannot be allocated.
Signed-off-by: Krzysztof Chruscinski <krzysztof.chruscinski@nordicsemi.no>
Mem_slab supports allocation with a timeout which blocks the context
if no slab is available. Update it to treat every timeout as K_NO_WAIT
when multithreading is disabled.
Signed-off-by: Krzysztof Chruscinski <krzysztof.chruscinski@nordicsemi.no>
Updated timer to not touch thread/scheduler code when multithreading
is off.
Signed-off-by: Krzysztof Chruscinski <krzysztof.chruscinski@nordicsemi.no>
The _kernel struct can be used when multithreading is disabled.
In that case sched.c may not be compiled.
Signed-off-by: Krzysztof Chruscinski <krzysztof.chruscinski@nordicsemi.no>
k_busy_wait is the only function from thread.c that is used when
CONFIG_MULTITHREADING=n. Move it to timeout since it fits better there,
as it requires the system clock to be present.
Signed-off-by: Krzysztof Chruscinski <krzysztof.chruscinski@nordicsemi.no>
Return NULL instead of a numeric zero for pointer types.
The current usage violates MISRA rule 11.9.
Signed-off-by: Daniel Leung <daniel.leung@intel.com>
This renames the obj_list element in struct dyn_obj to
dobj_list, to avoid identifier collision with the static
obj_list defined in userspace.c.
This fixes a violation of MISRA rule 5.9.
Signed-off-by: Daniel Leung <daniel.leung@intel.com>
work_queue_main() was missing final else statement
in the if else if construct. This commit adds else {}
to comply with coding guideline 15.7. Includes a
context-specific description of why this branch is empty.
Signed-off-by: Jennifer Williams <jennifer.m.williams@intel.com>
z_timeout_end_calc() was missing final else statement
in the if else if construct. This commit pulls the last
condition into a final else {} to comply with guideline
15.7.
Signed-off-by: Jennifer Williams <jennifer.m.williams@intel.com>
register_events() and signal_poll_event() were missing the final
else statement in the if else if construct. This commit adds
else {} to comply with coding guideline 15.7.
Signed-off-by: Jennifer Williams <jennifer.m.williams@intel.com>
Devices that do not require PM should just use NULL.
`device_pm_control_nop` is still kept as an alias of NULL until all
in-tree usage is replaced with NULL.
Code relying on the device_pm_control function now returns -ENOTSUP
(equivalent to calling device_pm_control_nop).
Signed-off-by: Gerard Marull-Paretas <gerard.marull@nordicsemi.no>
Due to the use of gperf to generate hash table for kobjects,
the addresses of these kobjects cannot change during the last
few phases of linking (especially between zephyr_prebuilt.elf
and zephyr.elf). Because of this, the gperf generated data
needs to be placed at the end of memory to avoid pushing symbols
around in memory. This prevents moving these generated blocks
to earlier sections, for example, pinned data section needed
for demand paging. So create placeholders for use in
intermediate linking to reserve space for these generated blocks.
Due to uncertainty about the size of these blocks, extra space is
being reserved, which could result in wasted space. However, this
retains the use of the hash table for faster lookup.
Signed-off-by: Daniel Leung <daniel.leung@intel.com>
The return value is documented to be true if the work was pending, but
the implementation returned true only if the work was actually running
(i.e. the caller had to wait). It should also return true if
scheduled or submitted work was cancelled.
Note that this means the return value cannot be used to determine
whether the call slept.
Signed-off-by: Peter Bigot <peter.bigot@nordicsemi.no>
Add the ability to define architecture specific structures, notably
the ability to extend struct _cpu with per-CPU arch-specific stuff that
can be accessed with _current_cpu->arch.* similarly to _current->arch.*
for per-thread architecture data.
This is opt-in for architectures that want to benefit from this,
otherwise empty defaults are provided. A placeholder for ARM64 is
included to show the pattern.
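As a rough sketch of the opt-in pattern (field, type, and header names here are illustrative, not the exact in-tree ones):

```c
/* Provided by an opting-in architecture in its arch headers */
struct _cpu_arch {
	void *exc_stack;	/* arch-private per-CPU data (illustrative) */
};

/* The generic per-CPU struct then embeds it as a trailing member */
struct _cpu {
	int id;			/* stand-in for the existing generic fields */
	struct _cpu_arch arch;	/* empty default if the arch does not opt in */
};

/* Arch code accesses its private data through the current CPU, e.g.
 * _current_cpu->arch.exc_stack, mirroring _current->arch for threads.
 */
```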
Signed-off-by: Nicolas Pitre <npitre@baylibre.com>
There's a typedef for non-pointer values compatible with atomic
non-pointer objects. Add a similar typedef for pointer values, and
the corresponding macro for initializing atomic pointer types.
This also will simplify replacing the Zephyr atomic API with one
based on C11 atomics, should that be desirable. C11 atomic pointer
values are not void*.
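A short, hedged usage sketch, assuming the pointer-typed accessors atomic_ptr_set()/atomic_ptr_get() from <sys/atomic.h>:

```c
#include <sys/atomic.h>

/* an atomically published pointer, statically initialized to NULL */
static atomic_ptr_t current_cfg = ATOMIC_PTR_INIT(NULL);

void publish_cfg(void *cfg)
{
	atomic_ptr_set(&current_cfg, cfg);	/* atomic store of a pointer */
}

void *read_cfg(void)
{
	return atomic_ptr_get(&current_cfg);	/* atomic load of a pointer */
}
```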
Signed-off-by: Peter Bigot <peter.bigot@nordicsemi.no>
This commit adds the ability to use a message queue as a
k_poll object. It follows the same pattern as polling on
FIFOs.
This change has been proven in practice at Samsara.
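A hedged sketch of what this enables, assuming the new poll type follows the FIFO naming (K_POLL_TYPE_MSGQ_DATA_AVAILABLE):

```c
#include <zephyr.h>

K_MSGQ_DEFINE(my_msgq, sizeof(uint32_t), 8, 4);

void wait_for_message(void)
{
	struct k_poll_event event = K_POLL_EVENT_STATIC_INITIALIZER(
		K_POLL_TYPE_MSGQ_DATA_AVAILABLE,
		K_POLL_MODE_NOTIFY_ONLY,
		&my_msgq, 0);
	uint32_t data;

	/* block until the message queue has data available */
	k_poll(&event, 1, K_FOREVER);

	if (k_msgq_get(&my_msgq, &data, K_NO_WAIT) == 0) {
		/* process data */
	}
}
```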
Fixes: #26728
Signed-off-by: Nick Graves <nicholas.graves@samsara.com>
This avoids contention between unrelated slabs and allows for
userspace accessible slabs when located in memory partitions.
Signed-off-by: Nicolas Pitre <npitre@baylibre.com>
Both operands of an operator in which the usual arithmetic
conversions are performed shall have the same essential
type category.
Signed-off-by: Flavio Ceolin <flavio.ceolin@intel.com>
Currently _curr_cpu is only used by the get_cpu macro to quickly access
the cpu struct. This is not really necessary because we can access
the struct directly by referencing &(_kernel.cpus[cpu_num]) in assembly.
Signed-off-by: Carlo Caione <ccaione@baylibre.com>
This adds bits to the paging timing histogram collection routines
so they can use timing functions to collect execution time data.
Signed-off-by: Daniel Leung <daniel.leung@intel.com>
This adds a new kconfig CONFIG_TIMING_FUNCTIONS_NEED_AT_BOOT so
that the timing subsystem can be initialized at boot, instead of
being #ifdef under thread runtime statistics. This will allow
other parts of the kernel and other subsystems to utilize the timing
functions.
Signed-off-by: Daniel Leung <daniel.leung@intel.com>
This adds the bits to record execution time of eviction selection,
and backing store page-in/page-out in histograms.
Signed-off-by: Daniel Leung <daniel.leung@intel.com>
This adds more bits to gather statistics on demand paging,
e.g. clean vs dirty pages evicted, # page faults with
IRQ locked/unlocked, etc.
Also extends this to gather per-thread demand paging
statistics.
Signed-off-by: Daniel Leung <daniel.leung@intel.com>
Tests of a value against zero should be made explicit, unless the
operand is effectively Boolean. This is based on MISRA rule 14.4.
Signed-off-by: Anas Nashif <anas.nashif@intel.com>
Tests of a value against zero should be made explicit, unless the
operand is effectively Boolean. This is based on MISRA rule 14.4.
Signed-off-by: Anas Nashif <anas.nashif@intel.com>
Add a 'U' suffix to values when computing and comparing against
unsigned variables and other related fixes of the same MISRA rule (10.4)
Signed-off-by: Anas Nashif <anas.nashif@intel.com>
k_work_schedule() is supposed to be a no-op if the work item is
already scheduled or submitted: the previous schedule is left
unchanged. The check incorrectly inhibited the schedule operation
when the work item was neither scheduled nor submitted, but was
running.
Signed-off-by: Peter Bigot <peter.bigot@nordicsemi.no>
The identifiers used in the declaration and definition of a function
shall be identical [MISRAC2012-RULE_8_3-b]
Signed-off-by: Anas Nashif <anas.nashif@intel.com>
This patch replaces ENOSYS with ENOTSUP to keep consistency with
the return value specification of k_float_enable().
Signed-off-by: Katsuhiro Suzuki <katsuhiro@katsuster.net>
This patch introduces a new API to enable the FPU for a thread. It is
the pair of the existing k_float_disable() API. It also adds an empty
arch_float_enable() to each architecture that has arch_float_disable().
The arc and riscv architectures already implement arch_float_enable(),
so those implementations are not touched.
Motivation: The current Zephyr implementation does not allow using the
FPU on the main thread and other system threads such as the work queue.
Users need to create another thread with K_FP_REGS for floating point
programs. Users could use the FPU more easily if they were able to
enable it on running threads.
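For example, enabling the FPU on the already-running main thread could look like this sketch:

```c
#include <zephyr.h>

void main(void)
{
	/* request FP register sharing for the current (main) thread */
	int ret = k_float_enable(k_current_get(), K_FP_REGS);

	if (ret != 0) {
		/* e.g. -ENOTSUP when the arch has no arch_float_enable() */
		return;
	}

	/* floating point code may now run on the main thread */
}
```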
Signed-off-by: Katsuhiro Suzuki <katsuhiro@katsuster.net>
This symbol is reserved and usage of reserved symbols violates the
coding guidelines. (MISRA 21.2)
NAME
    signal - ANSI C signal handling
SYNOPSIS
    #include <signal.h>
    sighandler_t signal(int signum, sighandler_t handler);
Signed-off-by: Anas Nashif <anas.nashif@intel.com>
This symbol is reserved and usage of reserved symbols violates the
coding guidelines. (MISRA 21.2)
NAME
    remove - remove a file or directory
SYNOPSIS
    #include <stdio.h>
    int remove(const char *pathname);
Signed-off-by: Anas Nashif <anas.nashif@intel.com>
Both thread monitor and thread names are no longer EXPERIMENTAL. They
have been used across the tree and lots of features already depend on
them.
Signed-off-by: Anas Nashif <anas.nashif@intel.com>
Removed k_pipe_block_put and the static functions only related to it.
After all the old usage of k_mem_block was replaced by k_heap,
k_pipe_block_put, which still takes a deprecated k_mem_block as an
argument, became dead code. All APIs that hooked into it from kernel.h
have been confirmed to be removed. Since an asynchronous message
descriptor is only allocated in k_pipe_block_put, the static pipe_async
functions are removed as well.
Signed-off-by: Shihao Shen <shihao.shen@intel.com>
This feature predated the tickless kernel and has been in legacy mode
for a while. We now have no drivers or systems that do not support
tickless, so remove this option and cleanup the code to only use
tickless.
Signed-off-by: Anas Nashif <anas.nashif@intel.com>
The clock/timer APIs are not application facing APIs, however, similar
to arch_ and a few other APIs they are available to implement drivers
and add support for new hardware and are documented and available to be
used outside of the clock/kernel subsystems.
Remove the leading z_ and provide them as clock_* APIs for someone
writing a new timer driver to use.
Signed-off-by: Anas Nashif <anas.nashif@intel.com>
The only user of arch_mem_domain_destroy was the deprecated
k_mem_domain_destroy function which has now been removed. So remove
arch_mem_domain_destroy as well.
Signed-off-by: Kumar Gala <kumar.gala@linaro.org>
Remove k_mem_domain_destroy and k_mem_domain_remove_thread as they've
been deprecated for at least 2 releases now.
Signed-off-by: Kumar Gala <kumar.gala@linaro.org>
The internal function z_smp_reacquire_global_lock() is not used
anywhere in the Zephyr code, so remove it.
Fixes #33273.
Signed-off-by: Enjia Mai <enjiax.mai@intel.com>
The static device dependencies from devicetree are not the only ones
that might be present at runtime. Add API that allows visiting
required devices without assuming that handles for or pointers to them
can be accessed as a static contiguous sequence.
Signed-off-by: Peter Bigot <peter.bigot@nordicsemi.no>
Wrap arch_sched_ipi() call in z_thread_abort() with ifdef checking for
hardware support of IPI.
Fixes #32723
Signed-off-by: Lauren Murphy <lauren.murphy@intel.com>
Previously, a racing write to the provided string could result
in up to CONFIG_THREAD_MAX_NAME_LEN-2 bytes after the end
of user-accessible memory being leaked into the thread name.
For now, make a temporary copy. In an ideal world this could
copy directly from userspace into the thread name, but that
violates the current vrfy / impl split.
Signed-off-by: James Harris <james.harris@intel.com>
The xtensa atomics layer was written with hand-coded assembly that had
to be called as functions. That's needlessly slow, given that the low
level primitives are a two-instruction sequence. Ideally the compiler
should see this as an inline to permit it to better optimize around
the needed barriers.
There was also a bug with the atomic_cas function, which had a loop
internally instead of returning the old value synchronously on a
failed swap. That's benign right now because our existing spin lock
does nothing but retry it in a tight loop anyway, but it's incorrect
per spec and would have caused a contention hang with more elaborate
algorithms (for example a spinlock with backoff semantics).
Remove the old implementation and replace with a much smaller inline C
one based on just two assembly primitives.
This patch also contains a little bit of refactoring: the atomics
scheme has been split out into a separate header for each
implementation, and the ATOMIC_OPERATIONS_CUSTOM kconfig has been
renamed to ATOMIC_OPERATIONS_ARCH to better capture what it means.
Signed-off-by: Andy Ross <andrew.j.ross@intel.com>
Commit 6b84ab3830 ("kernel/sched: Adjust locking in z_swap()") moved
the call to arch_cohere_stacks() out of the scheduler lock while doing
some reorganizing. On further reflection, this is incorrect. When
done outside the lock, the two arch_cohere_stacks() calls will race
against each other.
Signed-off-by: Andy Ross <andrew.j.ross@intel.com>
Zephyr docs state that timers will act as one-shot timers when started
with a period of K_NO_WAIT or K_FOREVER. However, the code adjusting
the period was setting K_FOREVER timeout ticks to 1, which caused the
timer to expire every tick. This adds a check to not adjust K_FOREVER
periods.
Signed-off-by: Eric Johnson <eric@liveathos.com>
pm_system_resume is always implemented when PM is enabled. There is no
need to have this weak function under an ifdef PM.
Signed-off-by: Flavio Ceolin <flavio.ceolin@intel.com>
pm_system_resume_from_deep_sleep is not implemented or used
anywhere. Just remove it and keep the code base cleaner.
Signed-off-by: Flavio Ceolin <flavio.ceolin@intel.com>
This function is useless and the state variable that it was
controlling is also not necessary because the same logic is being
handled by the variable post_ops_done.
This reasonably simplifies idle thread logic.
Signed-off-by: Flavio Ceolin <flavio.ceolin@intel.com>
pm_system_suspend is called only from the idle thread and should
not be exported as a public API.
Signed-off-by: Flavio Ceolin <flavio.ceolin@intel.com>
Previously, a k_sem_reset with any outstanding waiting threads would
leave the semaphore in an inconsistent state, with more threads
waiting in the wait_q than the count would indicate.
Explicitly return -EAGAIN to any waiting threads upon k_sem_reset to
ensure safety here.
Signed-off-by: James Harris <james.harris@intel.com>
Currently there is no way to distinguish between a caller
explicitly asking for a semaphore with a limit that
happens to be `UINT_MAX` and a semaphore that just
has a limit "as large as possible".
Add `K_SEM_MAX_LIMIT`, currently defined to `UINT_MAX`, and akin
to `K_FOREVER` versus just passing some very large wait time.
In addition, the `k_sem_*` APIs were type-confused, where
the internal data structure was `uint32_t`, but the APIs took
and returned `unsigned int`. This changes the underlying data
structure to also use `unsigned int`, as changing the APIs
would be a (potentially) breaking change.
These changes are backwards-compatible, but it is strongly suggested
to take a quick scan for `k_sem_init` and `K_SEM_DEFINE` calls with
`UINT_MAX` (or `UINT32_MAX`) and replace them with `K_SEM_MAX_LIMIT`
where appropriate.
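A short sketch of the suggested replacement:

```c
#include <zephyr.h>
#include <limits.h>

/* before: "as large as possible" spelled with UINT_MAX */
K_SEM_DEFINE(evt_sem_old, 0, UINT_MAX);

/* after: the intent is explicit */
K_SEM_DEFINE(evt_sem, 0, K_SEM_MAX_LIMIT);

void init_dynamic_sem(struct k_sem *sem)
{
	k_sem_init(sem, 0, K_SEM_MAX_LIMIT);
}
```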
Signed-off-by: James Harris <james.harris@intel.com>
Due to the recent changes to the scheduler, z_find_first_thread_to_unpend
and z_remove_thread_from_ready_q are not used anymore. So remove the
dead code.
Fixes: #32691
Signed-off-by: Spoorthy Priya Yerabolu <spoorthy.priya.yerabolu@intel.com>
While I'm in the idle code, let's clean this loop up. It was a really
bad #ifdef hell:
* Remove the CONFIG_TICKLESS_IDLE_THRESH logic (and the kconfig),
which never did anything but needlessly increase latency.
* Move the needed timeout logic from the main loop into
pm_save_idle(), which eliminates the special case for
!SYS_CLOCK_EXISTS.
Behavior (modulo that one kconfig) should be completely unchanged, and
now the inner part of the idle loop looks like:
while (true) {
	(void) arch_irq_lock();

	if (IS_ENABLED(CONFIG_PM)) {
		pm_save_idle();
	} else {
		k_cpu_idle();
	}
}
Signed-off-by: Andy Ross <andrew.j.ross@intel.com>
The removal of the abort handling also absconded with an IRQ lock that
is required for reliable operation in the idle loop. Put it back.
Once the idle loop has made a decision to enter idle, any interrupt
that arrives needs to be masked and delivered AFTER the system enters
idle. Otherwise we run the risk of races where the system accepts and
processes an interrupt that should have prevented idle, but then goes
to sleep anyway having already made the decision.
Signed-off-by: Andy Ross <andrew.j.ross@intel.com>
Now that the old API has been reimplemented with the new API remove
the old implementation and its tests.
Signed-off-by: Peter Bigot <peter.bigot@nordicsemi.no>
Switch the default and clean up some test workarounds. This will enable
final conversions necessary to transition to the new API.
Signed-off-by: Peter Bigot <peter.bigot@nordicsemi.no>
This commit provides a complete reimplementation of the work queue
infrastructure intended to eliminate the race conditions and feature
gaps in the existing implementation.
Both bare and delayable work structures are supported. Items can be
submitted; delayable items can be scheduled for submission at a future
time. Items can be delayed, queued, and running all at the same time.
A running item can also be canceling.
The new implementation:
* replaces "pending" with "busy" which identifies the active states;
* supports canceling delayed and submitted items;
* prevents resubmission of an item being canceled until cancellation
completes;
* supports waiting for cancellation to complete;
* supports flushing a work item (waiting for the last submission to
complete without preventing resubmission);
* supports waiting for a queue to drain (only allows resubmission from
the work thread);
* supports stopping a work queue in conjunction with draining it;
* prevents handler-reentrancy during resubmission.
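As a hedged illustration of the delayable-work surface described above (names as introduced by this reimplementation):

```c
#include <zephyr.h>

static struct k_work_delayable blink_work;
static struct k_work_sync work_sync;

static void blink_handler(struct k_work *work)
{
	/* runs on the system work queue */
}

void start_blinking(void)
{
	k_work_init_delayable(&blink_work, blink_handler);

	/* a no-op if the item is already scheduled or submitted */
	k_work_schedule(&blink_work, K_MSEC(100));
}

void stop_blinking(void)
{
	/* cancel, waiting for any in-flight handler to complete */
	k_work_cancel_delayable_sync(&blink_work, &work_sync);
}
```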
Signed-off-by: Peter Bigot <peter.bigot@nordicsemi.no>
Attempts to reimplement the existing work API using a new work
implementation failed, primarily due to heavy use of whitebox testing
in validating the original API. Add a temporary Kconfig that will
select between the two implementations so we can use the same
identifiers but select which implementation they reference.
This commit just adds the selection infrastructure and uses it to
conditionalize the existing implementation in anticipation of the new
one in the next commit.
Signed-off-by: Peter Bigot <peter.bigot@nordicsemi.no>
These functions are a subset of proposed public APIs to clean up
several issues related to safely handling waking of threads. They
have been made private as their interface may change, but their use
will simplify the reimplementation of the k_work functionality.
See: https://github.com/zephyrproject-rtos/zephyr/pull/29668
Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
Signed-off-by: Peter Bigot <peter.bigot@nordicsemi.no>
Several internal APIs wrote thread attributes (return value, mainly)
_after_ calling `z_ready_thread`. This is unsafe, at least in SMP,
because another core could have already picked up and run the thread.
Fixes #32800.
Signed-off-by: James Harris <james.harris@intel.com>
`z_impl_k_yield` unlocked sched_spinlock, only to lock it again
immediately, do a little bit more work, then unlock it again.
This causes performance issues on SMP, where `sched_spinlock`
is often fairly highly contended and cores often end up spinning
for quite a while waiting to retake the lock in `z_swap_unlocked`.
Instead directly pass the spinlock key to `z_swap` and avoid the
extra lock+unlock.
Signed-off-by: James Harris <james.harris@intel.com>
`z_is_t1_higher_prio_than_t2` was being called twice in both the
context-switch fastpath and in `z_priq_rb_lessthan`, just to
deal with priority ties. In addition, the API was error-prone
(and too much in the fastpath to be able to assert its invariants)
- see also #32710 for a previous example of this API breaking
and returning a>b but also b>a.
Replacing this with a direct 3-way comparison `z_cmp_t1_prio_with_t2`
sidesteps most of these issues. There is still a concern that
`sgn(z_cmp_t1_prio_with_t2(a,b)) != -sgn(z_cmp_t1_prio_with_t2(b,a))`
but I don't see any way to alleviate this aside from adding an
assert to the fastpath.
Signed-off-by: James Harris <james.harris@intel.com>
Previously two tasks with the same deadline and priority would
always have `z_is_t1_higher_prio_than_t2` `true` in both directions.
This is logically inconsistent, and results in `k_yield` not actually
yielding between identical threads.
Signed-off-by: James Harris <james.harris@intel.com>
Add a newer, much smaller and simpler implementation of abort and
join. No need to involve the idle thread. No need for a special code
path for self-abort. Joining a thread and waiting for an aborting one
to terminate elsewhere share an implementation. All work in both
calls happens under a single locked path with no unexpected
synchronization points.
This fixes a bug with the current implementation where the action of
z_sched_single_abort() was nonatomic, releasing the lock internally at
a point where the thread to be aborted could self-abort and confuse
the state such that it failed to abort at all.
Note that the arm32 and native_posix architectures, which have their
own thread abort implementations, now see a much simplified
"z_thread_abort()" internal API.
Signed-off-by: Andy Ross <andrew.j.ross@intel.com>
THIS COMMIT DELIBERATELY BREAKS BISECTABILITY FOR EASE OF REVIEW.
SKIP IF YOU LAND HERE.
Remove the existing implementation of k_thread_abort(),
k_thread_join(), and the attendant facilities in the thread subsystem
and idle thread that support them.
Signed-off-by: Andy Ross <andrew.j.ross@intel.com>
This function would correctly suppress attempts to set timeouts that
were too soon for the driver or farther out than what was already set,
but when it actually set the timeout it would use the requested value
and not clamp it to the minimum of it and the current timeout
expiration, leading to "too-long" timeouts being set at the driver.
In uniprocessor configurations, that turns out to have been benign
because something else would always come back along when timeout state
changed and fix the broken value before the expiration.
But in SMP, this opens up races. For example, the idle thread on one
CPU can see that there are no active threads and schedule a maximum
value timeout at the same time as the other thread adds a new timeout
that expects a near-term expiration. The broken code here would see
that the new timeout exists, decide that yes it needs to override, but
then set the K_TICKS_FOREVER value it got from the idle thread!
Signed-off-by: Andy Ross <andrew.j.ross@intel.com>
When the kernel is TICKLESS, timeouts are set as needed, and drivers
all have some minimum amount of time before which they can reliably
schedule an interrupt. When this happens, drivers will kick the
requested interrupt out by one tick. This means that it's not
reliably possible to get a timeout set for "one tick in the
future"[1].
And attempting to do that is dangerous anyway. If the driver will
delay a one-tick interrupt, then code that repeatedly tries to
schedule an imminent interrupt may end up in a state where it is
constantly pushing the interrupt out into the future, and timer
interrupts stop arriving! The timeout layer actually has protection
against this case.
Finally getting to the point: in recent changes, the timeslice layer
lost its integration with the "imminent" test in the timeout code, so
it's now able to run into this situation: very rapidly context
switching code (or rapidly arriving interrupts) will have the effect
of infinitely[2] delaying timeouts and stalling the whole timeout
subsystem.
Don't try to be fancy. Just clamp timeslice duration such that a
slice is 2 ticks at minimum and we'll never hit the problem. Adjust
the two tests that were explicitly requesting very short slice rates.
[1] Of course, the tradeoff is that the tick rate can be 100x higher
or more, so on balance tickless is a huge win.
[2] Actually it only lasts until a 31 bit signed rollover in the HPET
cycle count in practice.
Signed-off-by: Andy Ross <andrew.j.ross@intel.com>
Recent work to normalize use of the thread QUEUED state bit means that
we never attempt to remove unqueued threads from the low-level run
queue. So the old workaround for SWAP_NONATOMIC that was trying to
detect this condition isn't necessary anymore.
Which is serendipitous, because it was written to encode some very
specific logic about the circumstances where _current could be
dequeued that I'd like to be able to break.
Signed-off-by: Andy Ross <andrew.j.ross@intel.com>
This is part of the scheduler API, and was always just a synchronized
wrapper around the internal ready_thread() function. But where the
internal users seem to be careful not to call it on threads that are
not known to be already queued or running, the general users in the
IPC code seem to be less strict.
Add a simple test to detect the case where a thread is already
running. Right now this just loops over the array of CPUs, so is O(N)
in the CPU count even though N is never more than four for us
currently. But this is possible without modifying data structures. A
more scalable way to do this if we ever need to run on very parallel
systems would be to use another state bit for RUNNING, or to keep a
backpointer in the thread struct to the CPU it's running on, etc...
Signed-off-by: Andy Ross <andrew.j.ross@intel.com>
Swap was originally written to use the scheduler lock just to select a
new thread, but it would be nice to be able to rely on scheduler
atomicity later in the process (in particular it would be nice if the
assignment to cpu.current could be seen atomically). Rework the code
a bit so that swap takes the lock itself and holds it until just
before the call to arch_switch().
Note that the local interrupt mask has always been required to be held
across the swap, so extending the lock here has no effect on latency
at all on uniprocessor setups, and even on SMP only affects average
latency and not worst case.
Signed-off-by: Andy Ross <andrew.j.ross@intel.com>
Aborted threads will cancel their timeouts, but the timeout subsystem
isn't protected under the same lock so it's possible for a timeout to
fire just as a thread is being aborted and wake it up unexpectedly.
Check the state before blowing anything up.
Signed-off-by: Andy Ross <andrew.j.ross@intel.com>
This got missed, leaving garbage there for restarted threads to trip
on. Actually I see multiple uninitialized fields, which seems odd.
This code deserves some rework, thread initialization isn't a
performance path and we should probably be zeroing the struct out.
Signed-off-by: Andy Ross <andrew.j.ross@intel.com>
Remove duplication in the code by moving macro LOCKED() to the correct
kernel_internal.h header.
Signed-off-by: Andrei Emeltchenko <andrei.emeltchenko@intel.com>
This adds a new kconfig CONFIG_SRAM_OFFSET to specify the offset
from beginning of SRAM where the kernel begins. On x86 and
PC compatible platforms, the first 1MB of RAM is reserved and
Zephyr should not link anything there. However, this 1MB still
needs to be mapped by the MMU to access various platform related
information. CONFIG_SRAM_OFFSET serves similar function as
CONFIG_KERNEL_VM_OFFSET and is needed for proper phys/virt
address translations.
Signed-off-by: Daniel Leung <daniel.leung@intel.com>
The Z_BOOT_VIRT_TO_PHYS() and Z_BOOT_PHYS_TO_VIRT() address
translation macros are flipped in their calculations.
The calculation is supposed to be:
virt = phys + ((KERNEL_VM_BASE + KERNEL_VM_OFFSET) -
SRAM_BASE_ADDRESS)
So fix them.
Signed-off-by: Daniel Leung <daniel.leung@intel.com>
The computation was using the already-adjusted input value that
assumed relative timeouts and not the actual argument the user passed.
Absolute timeouts were consistently waking up one tick early.
Fixes #32499
Signed-off-by: Andy Ross <andrew.j.ross@intel.com>
Following the idiom used for system calls, add script support to read
the initial application binary to identify which devices are defined,
and to use their offset in the device array as their unique handle
rather than the externally-defined ordinal from devicetree. The
device dependency arrays are updated to use these handles.
Signed-off-by: Peter Bigot <peter.bigot@nordicsemi.no>
Move the busy status from a global atomic bit sequence to atomic flags
in the device PM state. While this temporarily adds 4 bytes to each
PM structure the whole device PM infrastructure will be refactored and
it's likely the extra memory can be recovered.
Signed-off-by: Peter Bigot <peter.bigot@nordicsemi.no>
Separate the state indicator of whether the initialization function
has been invoked from the success or failure of the initialization.
This allows precise confirmation that the device is ready (i.e. it has
been initialized, and that initialization succeeded).
Signed-off-by: Peter Bigot <peter.bigot@nordicsemi.no>
This avoids the need for a distinct object that uses flash to store its
initializer. Instead the state is initialized when the kernel is
starting up, before anything can reference it. In future refactoring
the PM state could be accessed directly without storing an extra
pointer in the static device state.
Signed-off-by: Peter Bigot <peter.bigot@nordicsemi.no>
Initialize all device objects in a batch before invoking any code that
might try to reference data in them. This eliminates a race condition
enabled by the ability to resolve a device structure at build time,
and reference it from one device's initialization routine before the
device itself has been initialized.
While the device is pulled from the sys_init records rather than
static devices, all in-tree init_entry records that are associated
with devices are produced via Z_DEVICE_DEFINE(), so there should be no
static devices that would be missed by instead iterating over the
device records.
Signed-off-by: Peter Bigot <peter.bigot@nordicsemi.no>
Some recent changes exposed some common "arch_switch() anti-patterns"
in various architectures. The documentation technically described
this all correctly, but probably wasn't as clear as it should have
been. Rewrite, making clear exactly what needs to happen and how the
fields should be interpreted.
Signed-off-by: Andy Ross <andrew.j.ross@intel.com>
It was possible with pathological timing (see below) for the scheduler
to pick a cycle of threads on each CPU and enter the context switch
path on all of them simultaneously.
Example:
* CPU0 is idle, CPU1 is running thread A
* CPU1 makes high priority thread B runnable
* CPU1 reaches a schedule point (or returns from an interrupt) and
decides to run thread B instead
* CPU0 simultaneously takes its IPI and returns, selecting thread A
Now both CPUs enter wait_for_switch() to spin, waiting for the context
switch code on the other thread to finish and mark the thread
runnable. So we have a deadlock, each CPU is spinning waiting for the
other!
Actually, in practice this seems not to happen on existing hardware
platforms, it's only exercisable in emulation. The reason is that the
hardware IPI time is much faster than the software paths required to
reach a schedule point or interrupt exit, so CPU1 always selects the
newly scheduled thread and no deadlock appears. I tried for a bit to
make this happen with a cycle of three threads, but it's complicated
to get right and I still couldn't get the timing to hit correctly. In
qemu, though, the IPI is implemented as a Unix signal sent to the
thread running the other CPU, which is far slower and opens the window
to see this happen.
The solution is simple enough: don't store the _current thread in the
run queue until we are on the tail end of the context switch path,
after wait_for_switch(), when we are guaranteed to reach the end.
Note that this requires changing a little logic to handle the yield
case: because we can no longer rely on _current's position in the run
queue to suppress it, we need to do the priority comparison directly
based on the existing "swap_ok" flag (which has always meant
"yielded", and maybe should be renamed).
Signed-off-by: Andy Ross <andrew.j.ross@intel.com>
The QUEUED state flag was managed separately from the run queue
insertion/deletion, and the logic (while AFAICT perfectly correct) was
tangled in a few places trying to keep them in sync. Put the
management of both behind a queue_thread()/dequeue_thread() API for
clarity. The ALWAYS_INLINE usage seems to be working to get the
compiler to condense the resulting multiple assignments. No behavior
change.
Signed-off-by: Andy Ross <andrew.j.ross@intel.com>
The "null out the switch handle and put it back" code in the swap
implementation is a holdover from some defensive coding (not wanting
to break the case where we picked our current thread), but it hides a
subtle SMP race: when that field goes NULL, another CPU that may have
selected that thread (which is to say, our current thread) as its next
to run will be spinning on that to detect when the field goes
non-NULL. So it will get the signal to move on when we revert the
value, when clearly we are still running on the stack!
In practice this was found on x86 which poisons the switch context
such that it crashes instantly.
Instead, be firm about state and always set the switch handle of a
currently running thread to NULL immediately before it starts running:
right before entering arch_switch() and symmetrically on the interrupt
exit path.
Fixes #28105
Signed-off-by: Andy Ross <andrew.j.ross@intel.com>
Some legacy spots in our IPC layer (legally) pass a NULL wait queue to
pend(). Allow this in the coherence assertion.
Signed-off-by: Andy Ross <andrew.j.ross@intel.com>
The poll code uses a dummy wait queue so the threads have something to
block on, but the previous coherence pass (which rearranged things to
put the _poller data elsewhere) missed that this was on the stack,
which is not allowed. It actually has no use except as a list, so
make it a global static instead.
Signed-off-by: Andy Ross <andrew.j.ross@intel.com>
The z_swap_unlocked() function used a dummy spinlock for simplicity.
But this runs afoul of the checking for stack-resident spinlocks
(forbidden when KERNEL_COHERENCE is set). And it's executing needless
code to release the lock anyway. Replace with a compile time NULL,
which will improve performance, correctness and code size.
Signed-off-by: Andy Ross <andrew.j.ross@intel.com>
The two calls to unpend a thread from a wait queue were inexplicably*
unsynchronized, as James Harris discovered. Rework them to call the
lowest level primitives so we can wrap the process inside the scheduler
lock.
Fixes #32136
* I took a brief look. What seems to have happened here is that these
were originally synchronized via an implicit lock taken by an outer caller
(remember the original Uniprocessor irq_lock() API is a recursive
lock), and they were mostly implemented in terms of middle-level
calls that were themselves locked. So those got ported over to the
newer spinlock but the outer wrapper layer got forgotten.
Signed-off-by: Andy Ross <andrew.j.ross@intel.com>
This lets the linker tell us what kind of alignment is required
for both tdata and tbss data when copying them into stack.
If they are not aligned as expected by the toolchain, generated
code would be accessing incorrect location for thread variables.
Fixes #32015
Signed-off-by: Daniel Leung <daniel.leung@intel.com>
The linker script defines `z_mapped_size` as follows:
```
z_mapped_size = z_mapped_end - z_mapped_start;
```
This is done with the belief that precomputed values at link time will
make the code smaller and faster.
On Aarch64, symbol values are relocated and loaded relative to the PC
as those are normally meant to be memory addresses.
Now if you have e.g. `CONFIG_SRAM_BASE_ADDRESS=0x2000000000` then
`z_mapped_size` might still have a reasonable value, say 0x59334.
But, when interpreted as an address, that's very very far from the PC
whose value is in the neighborhood of 0x2000000000. That overflows the
4GB relocation range:
```
kernel/libkernel.a(mmu.c.obj): in function `z_mem_manage_init':
kernel/mmu.c:527:(.text.z_mem_manage_init+0x1c):
relocation truncated to fit: R_AARCH64_ADR_PREL_PG_HI21
```
The solution is to define `Z_KERNEL_VIRT_SIZE` in terms of
`z_mapped_end - z_mapped_start` at the source code level. Given this
is used within loops that already start with `z_mapped_start` anyway,
the compiler is smart enough to combine the two occurrences and
dispense with a size counter, making the code effectively
slightly better for all while avoiding the Aarch64 relocation
overflow:
```
text data bss dec hex filename
1216 8 294936 296160 484e0 mmu.c.obj.arm64.before
1212 8 294936 296156 484dc mmu.c.obj.arm64.after
1110 8 9244 10362 287a mmu.c.obj.x86-64.before
1106 8 9244 10358 2876 mmu.c.obj.x86-64.after
```
Signed-off-by: Nicolas Pitre <npitre@baylibre.com>
The SYS_CLOCK_TICKS_PER_SEC default may depend on the kernel config
for tickless, rather than the capability.
Signed-off-by: Martin Åberg <martin.aberg@gaisler.com>
Activating the K_FP_REGS flag introduces stack memory
overhead for the main thread on the Cortex-M architecture.
Several ARM platforms experience main thread stack
overflows when building with FPU_SHARING=y.
Enabling FPU sharing in the main thread should not be
the default configuration. Users are welcome to
enable FP sharing on the main thread in the
application code, in main().
This reverts commit 8453a73ede.
Signed-off-by: Ioannis Glaropoulos <Ioannis.Glaropoulos@nordicsemi.no>
The call to arch_mem_coherent() inside spinlock.h,
when spinlock validation and memory coherence are enabled,
causes a build error because spinlock.h does not include
kernel_arch_func.h directly. However, simply including
that file does not work either, as this creates
a chicken-and-egg problem in the chain of include files.
In order to make spinlock validation work with kernel
coherence enabled, a separate function is created
to break the circular dependencies between include files.
Signed-off-by: Daniel Leung <daniel.leung@intel.com>
There was an edge case in the timeout handling (exposed by, but not
strictly related to, the recent timeslice fix): the next_timeout()
computation would include time slice expiration as a clamp on the
result, but this would be invoked also on the z_set_timeout_expiry()
path which gets hooked on entry to a new thread which is needed to set
the timeout in the first place. So if no other timer interrupt was
scheduled, it was possible to miss the first timeslice interrupt after
thread scheduling.
The explanation is much longer than the fix (use <= as the comparator
instead of <).
In practice this was only being hit in the existing test suite on
riscv miv running under renode using non-default clock rates.
Signed-off-by: Andy Ross <andrew.j.ross@intel.com>
Fix an edge case that snuck in with the recent fix: if timeslicing is
enabled, the CPU's slice_ticks will be zero, and thus match a timeout
object's dticks value of zero, and thus get suppressed (because "we
already have a timeout scheduled for that") incorrectly.
Fixes #31789
Signed-off-by: Andy Ross <andrew.j.ross@intel.com>
There are more and more tests that fail due to stack overflows.
Increase MAIN_STACK_SIZE to fix those issues.
Signed-off-by: Alexandre Bourdiol <alexandre.bourdiol@st.com>
Time slices don't have a timeout struct associated and stored in the
timeout_list. The time slice timeout is programmed directly in the
system clock and tracked in _current_cpu->slice_ticks.
There is one issue where the time slice timeout can be missed because
the system clock is re-programmed to a longer timeout. For this to
happen, it is only necessary that the timeout_list is empty (no timeout
set) and a new timeout longer than the remaining time slice is set.
This is caused because z_add_timeout does not check for the slice ticks.
The following example demonstrates the issue:
K_THREAD_STACK_DEFINE(tstack, STACK_SIZE);
K_THREAD_STACK_ARRAY_DEFINE(tstacks, NUM_THREAD, STACK_SIZE);
K_SEM_DEFINE(sema, 0, NUM_THREAD);

static inline void spin_for_ms(int ms)
{
	uint32_t t32 = k_uptime_get_32();

	while (k_uptime_get_32() - t32 < ms) {
	}
}

static void thread_time_slice(void *p1, void *p2, void *p3)
{
	printk("thread[%d] - Before spin\n", (int)(uintptr_t)p1);

	/* Spinning for longer than slice */
	spin_for_ms(SLICE_SIZE + 20);

	/* The following print should not happen before another
	 * same priority thread starts.
	 */
	printk("thread[%d] - After spinning\n", (int)(uintptr_t)p1);
	k_sem_give(&sema);
}

void main(void)
{
	k_tid_t tid[NUM_THREAD];
	struct k_thread t[NUM_THREAD];
	uint32_t slice_ticks = k_ms_to_ticks_ceil32(SLICE_SIZE);
	int old_prio = k_thread_priority_get(k_current_get());

	/* disable timeslice */
	k_sched_time_slice_set(0, K_PRIO_PREEMPT(0));

	for (int j = 0; j < 2; j++) {
		k_sem_reset(&sema);

		/* update priority for current thread */
		k_thread_priority_set(k_current_get(), K_PRIO_PREEMPT(j));

		/* synchronize to tick boundary */
		k_usleep(1);

		/* create delayed threads with equal preemptive priority */
		for (int i = 0; i < NUM_THREAD; i++) {
			tid[i] = k_thread_create(&t[i], tstacks[i], STACK_SIZE,
						 thread_time_slice, (void *)i, NULL,
						 NULL, K_PRIO_PREEMPT(j), 0,
						 K_NO_WAIT);
		}

		/* enable time slice (and reset the counter!) */
		k_sched_time_slice_set(SLICE_SIZE, K_PRIO_PREEMPT(0));

		/* Spins for while to spend this thread time but not longer */
		/* than a slice. This is important */
		spin_for_ms(100);

		printk("before sleep\n");

		/* relinquish CPU and wait for each thread to complete */
		k_sleep(K_TICKS(slice_ticks * (NUM_THREAD + 1)));

		for (int i = 0; i < NUM_THREAD; i++) {
			k_sem_take(&sema, K_FOREVER);
		}

		/* test case teardown */
		for (int i = 0; i < NUM_THREAD; i++) {
			k_thread_abort(tid[i]);
		}

		/* disable time slice */
		k_sched_time_slice_set(0, K_PRIO_PREEMPT(0));
	}

	k_thread_priority_set(k_current_get(), old_prio);
}
Signed-off-by: Flavio Ceolin <flavio.ceolin@intel.com>
Some arches like x86 need all memory mapped so that they can
fetch information placed arbitrarily by firmware, like ACPI
tables.
Ensure that if this is the case, the kernel won't accidentally
clobber it by thinking the relevant virtual memory is unused.
Otherwise this has no effect on page frame management.
Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
If we evict enough pages to completely fill the backing store,
through APIs like k_mem_map(), z_page_frame_evict(), or
z_mem_page_out(), this will produce a crash the next time we
try to handle a page fault.
The backing store now always reserves a free storage location
for actual page faults.
Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
This will enable testing of the implementation until the
critical set of pages is identified and known to the
kernel.
Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
Implement runtime APIs for pinning, paging in, and evicting
memory, as well as the page fault hook called from architecture
code.
Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
Architecture layer hooks for demand paging. See
doxygen for these API definitions for more details.
Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
Page tables created at build time may not include the
gperf data at the very end of RAM. Ensure this is mapped
properly at runtime to work around this.
Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
Pre-allocation of paging structures is now required, such that
no allocations are ever needed when mapping memory.
Instantiation of new memory domains may still require allocations
unless a common page table is used.
Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
Allows applications to increase the data space available to Zephyr
via anonymous memory mappings. Loosely based on mmap().
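A hedged usage sketch, assuming the mmap-like entry point is k_mem_map(size, flags) with permission flags such as K_MEM_PERM_RW:

```c
#include <sys/mem_manage.h>

#define POOL_PAGES 16

void *alloc_page_pool(void)
{
	/* anonymous, zero-filled, page-aligned mapping backed by free RAM */
	void *pool = k_mem_map(POOL_PAGES * CONFIG_MMU_PAGE_SIZE,
			       K_MEM_PERM_RW);

	/* NULL when virtual address space or physical pages run out */
	return pool;
}
```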
Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
The strategy used in z_heap_aligned_alloc() was to allocate an extra
align-sized memory block for storing a pointer to the memory heap.
This is wasteful in terms of memory usage when alignment is larger
than a pointer width. A loop is needed to find the initial memory
start when freeing it which isn't optimal either.
Instead, let's have sys_heap_aligned_alloc() rewind a pointer after
it is aligned to make just enough room for storing our heap reference.
This way the heap reference is always located immediately before the
aligned memory and any unused memory is returned to the heap.
The rewind and alignment values may coincide in which case only
the alignment is necessary anyway.
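Roughly, the resulting layout and the free-side lookup it enables look like this (the types and helper below are illustrative, not the exact in-tree code):

```c
#include <zephyr.h>

/* Layout after the rewind:
 *
 *   ... heap-internal header ... | struct k_heap * | aligned user memory
 *                                                    ^ pointer returned to caller
 *
 * so the free path can recover the owning heap without any search.
 */
static inline struct k_heap *owning_heap(void *aligned_ptr)
{
	return ((struct k_heap **)aligned_ptr)[-1];
}
```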
Signed-off-by: Nicolas Pitre <npitre@baylibre.com>
Remove conditionals (PM_DEEP_SLEEP_STATES and PM_SLEEP_STATES) from
power management code. Now these features are always available when
power management is enabled.
Signed-off-by: Flavio Ceolin <flavio.ceolin@intel.com>
Migrate the whole pm subsystem to use new power states information
from power_state.h and get states and residency properties from
device tree.
Signed-off-by: Flavio Ceolin <flavio.ceolin@intel.com>
The internal API to measure time until a delay expires does not modify
the referenced timeout. Make the functions that call it take pointers
to const objects, so that they can be used with pointer to
const-qualified containers.
Signed-off-by: Peter Bigot <peter.bigot@nordicsemi.no>
This removes the z_ prefix from those (functions, enums, etc.) that
are being used outside the coredump subsys. This aligns better
with the naming convention.
Signed-off-by: Daniel Leung <daniel.leung@intel.com>
The only two supported operations for data caches in the cache framework
are currently arch_dcache_flush() and arch_dcache_invd().
This is quite restrictive because for some architectures we also want to
control i-cache and in general we want a finer control over what can be
flushed, invalidated or cleaned. To address these needs this patch
expands the set of operations that can be performed on data and
instruction caches, adding hooks for the operations on the whole cache,
a specific level or a specific address range.
Signed-off-by: Carlo Caione <ccaione@baylibre.com>
register_event always returns 0, so the conditional will
always take the first branch and code in the else part
is never reached.
Fixes #31282
Signed-off-by: Ningx Zhao <ningx.zhao@intel.com>
1. Exclude the CODE UNREACHABLE line while generating coverage report.
2. Exclude the memory domain deprecated API when calculating code
coverage.
Signed-off-by: Enjia Mai <enjiax.mai@intel.com>
First, the maximum heap size must fit in 31 bits worth of chunks
because the internal 32-bit field holding the size is shared with
the `used` bit.
Then the mention of a 256-byte block in the doc is no longer
relevant. That pertained to the previous allocator implementation.
And ditto for the HEAP_MEM_POOL_MIN_SIZE kconfig option.
Signed-off-by: Nicolas Pitre <npitre@baylibre.com>
Needing to check the current cycle time (which involves a spinlock and
register read on most architectures) is wasteful in the scheduler
priority predicate, which is a hot path. If we "burn" one bit of
precision (and document the rule), we can do the comparison without
knowing the current time.
2^31 cycles is still far longer than a live deadline thread in any
legitimate realtime app should ever live before being scheduled.
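Concretely, with deadlines kept as 32-bit cycle counts, a signed difference orders them without reading the current cycle counter, provided both lie within 2^31 cycles of now (that is the bit of precision being burned). A sketch of the idea, not the exact in-tree code:

```c
#include <stdbool.h>
#include <stdint.h>

/* true if deadline a expires before deadline b, assuming both are
 * within 2^31 cycles of the (unread) current time
 */
static inline bool deadline_before(uint32_t a, uint32_t b)
{
	return (int32_t)(a - b) < 0;
}
```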
Signed-off-by: Andy Ross <andrew.j.ross@intel.com>
Adds a linker section for Cortex-M instruction tightly coupled memory
(ITCM), similar to the existing section for DTCM. A new executable MPU
region is not added as there isn't currently a need to make this section
accessible to user mode. This section can be enabled by setting a device
tree chosen node zephyr,itcm.
Signed-off-by: Maureen Helm <maureen.helm@nxp.com>
This allows allocating dynamic kernel objects with memory alignment
requirements. The first candidate is for thread objects where,
on some architectures, it must be aligned for saving/restoring
registers.
Signed-off-by: Daniel Leung <daniel.leung@intel.com>
PM depends on SYS_CLOCK_EXISTS in Kconfig but several boards have
Kconfig overrides that allow the dependency to be ignored, so
CONFIG_PM=y even though CONFIG_SYS_CLOCK_EXISTS=n. Fix the code so
that the true dependency is reflected in the generated code.
Signed-off-by: Peter Bigot <peter.bigot@nordicsemi.no>
This change adds z_heap_aligned_alloc() and k_aligned_alloc()
and changes z_heap_malloc() and k_malloc() to be small wrappers around
the aligned variants.
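A hedged usage sketch:

```c
#include <zephyr.h>

void *alloc_dma_descriptor(void)
{
	/* 64-byte aligned block from the kernel heap */
	void *buf = k_aligned_alloc(64, 256);

	/* released later with the ordinary k_free() */
	return buf;
}
```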
Fixes #29519
Signed-off-by: Christopher Friedt <chrisfriedt@gmail.com>
Ticks should be assigned directly to timeout value in case of
CONFIG_LEGACY_TIMEOUT_API=y, just as they were before referenced patch.
Fixes: 7a815d5d99 ("kernel: sched: Use k_ticks_t in z_tick_sleep")
Signed-off-by: Marcin Niestroj <m.niestroj@grinn-global.com>
Renamed to make its semantics clearer; this function maps
*physical* memory addresses and is not equivalent to
posix mmap(), which might confuse people.
mem_map test case remains the same name as other memory
mapping scenarios will be added in the fullness of time.
Parameter names to z_phys_map adjusted slightly to be more
consistent with names used in other memory mapping functions.
Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
Inside the idle loop, in some configuration, IRQ is unlocked and
then immediately locked again. There is a side effect:
1. IRQ is unlocked in middle of the loop.
2. Another thread (A) can now run so idle thread is un-scheduled.
3. Thread A runs to its end and going through the thread
self-abort path.
4. Idle thread is rescheduled again, and continues to run
the remaining loop until it eventually calls k_cpu_idle().
The "pending abort" path is not being executed on thread A
at this point.
5. Now, thread A is suspended, and the CPU is in idle waiting
for interrupts (e.g. timeouts).
6. Thread B is waiting to join on thread A. Since thread A has
not been terminated yet, thread B is waiting until
the idle thread runs again and starts executing from
the beginning of while loop.
7. Depending on how many threads are running and how active
the platform is, idle thread may not run again for a while,
resulting in thread B appearing to be stuck.
To avoid this situation, the unlock/lock pair in middle of
the loop is removed so no rescheduling can be done mid-loop.
When there is no thread abort pending, it simply locks IRQ
and calls k_cpu_idle(). This is almost identical to the idle
loop before the thread abort code was introduced (except
the check for cpu->pending_abort).
Fixes #30573
Signed-off-by: Daniel Leung <daniel.leung@intel.com>
In order to release the irq_offload semaphore outside kernel/thread.c, we
make it visible by making it non-static when building under ztest. This
is needed, for example, when calling irq_offload() to enter interrupt
context and a fatal error happens: you then have to release the
semaphore in your fatal handler, or irq_offload will remain locked and
can no longer be used.
Signed-off-by: Enjia Mai <enjiax.mai@intel.com>
Cleanup code for power management and remove some duplication and
isolate power management code from the kernel code.
Signed-off-by: Anas Nashif <anas.nashif@intel.com>
- Remove SYS_ prefix
- shorten POWER_MANAGEMENT to just PM
- DEVICE_POWER_MANAGEMENT -> PM_DEVICE
and use PM_ as the prefix for all PM related Kconfigs
Signed-off-by: Anas Nashif <anas.nashif@intel.com>
k_heap did not have an aligned alloc function, even though
this is supported by the internal sys_heap.
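A sketch of the added call, assuming it mirrors sys_heap_aligned_alloc() with a timeout argument (the heap name is illustrative):

```c
#include <zephyr.h>

K_HEAP_DEFINE(io_heap, 2048);

void *get_descriptor(void)
{
	/* 32-byte aligned block; do not block if the heap is exhausted */
	void *desc = k_heap_aligned_alloc(&io_heap, 32, 128, K_NO_WAIT);

	return desc;	/* freed later with k_heap_free(&io_heap, desc) */
}
```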
Signed-off-by: Maximilian Bachmann <m.bachmann@acontis.com>
These implemented a k_mem_pool in terms of the now universal k_heap
utility. That's no longer necessary now that the k_mem_pool API has
been removed.
Signed-off-by: Andy Ross <andrew.j.ross@intel.com>
The mailbox and msgq utilities had API variants that could pass old
mem_pool blocks through the data structure. That API is being
deprecated (and the features were obscure), so remove the internal
support.
Signed-off-by: Andy Ross <andrew.j.ross@intel.com>
The k_mem_pool allocator is no more, and the z_mem_pool compatibility
API is going away. The internal allocator should be a k_heap always.
Signed-off-by: Andy Ross <andrew.j.ross@intel.com>
These were implemented in terms of the mem_pool/block API directly
(for complicated reasons, the pointers returned from this API may have
been allocated from allocators other than the single system heap).
Have them use a k_heap instead.
Requires a tweak to one test which had hard-coded an assumption about
the header size.
Signed-off-by: Andy Ross <andrew.j.ross@intel.com>
Mark all k_mem_pool APIs deprecated for future code. Remaining
internal usage now uses equivalent "z_mem_pool" symbols instead.
Fixes#24358
Signed-off-by: Andy Ross <andrew.j.ross@intel.com>
Remove the MEM_POOL_HEAP_BACKEND kconfig, treating it as true always.
Now the legacy mem_pool cannot be enabled and all usage uses the
k_heap/sys_heap backend.
Signed-off-by: Andy Ross <andrew.j.ross@intel.com>
Changed the algorithm for checking for a pending thread in mem_slab.
The check for a pending thread is now done only if there is
no memory left.
Signed-off-by: Kamil Lazowski <Kamil.Lazowski@nordicsemi.no>
z_tick_sleep was using int32_t, which could cause an overflow
when converting from k_ticks_t.
Signed-off-by: Flavio Ceolin <flavio.ceolin@intel.com>
This patch activates the FPU feature for the main thread if the FPU
related configs (FPU, FPU_SHARING) are enabled.
Signed-off-by: Katsuhiro Suzuki <katsuhiro@katsuster.net>
With MMU features enabled, we are using 248 out of 256
available bytes on 32-bit. This is extremely uncomfortable; relax
it to a larger value, as several other arches do.
Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
Adds a K_DELAYED_WORK_DEFINE, matching the K_WORK_DEFINE macro, with
accompanying Z_DELAYED_WORK_INITIALIZER macro.
Makes k_delayed_work_init a static inline function, like its K_WORK
counterpart.
Signed-off-by: Trond Einar Snekvik <Trond.Einar.Snekvik@nordicsemi.no>
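Usage then mirrors K_WORK_DEFINE, roughly as sketched below (the handler and
submit call are shown for context and reflect the delayed-work API of the
time; treat the snippet as illustrative):

    #include <zephyr.h>

    static void blink_expired(struct k_work *work)
    {
            ARG_UNUSED(work);
            /* timer-delayed work runs here */
    }

    K_DELAYED_WORK_DEFINE(blink_work, blink_expired);

    void start_blinking(void)
    {
            k_delayed_work_submit(&blink_work, K_MSEC(100));
    }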
Move banner and boot delay handling out of init.c.
The banner code was scattered all over init.c, making it
unreadable.
Signed-off-by: Anas Nashif <anas.nashif@intel.com>
Most kernel files were declaring the os log module without providing
a log level. Because of that, the default log level was used instead of
CONFIG_KERNEL_LOG_LEVEL.
Signed-off-by: Krzysztof Chruscinski <krzysztof.chruscinski@nordicsemi.no>
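In each affected file the fix amounts to passing the level explicitly when
declaring the module, along these lines:

    #include <logging/log.h>

    /* Declare the existing "os" module with an explicit level so
     * CONFIG_KERNEL_LOG_LEVEL is honoured instead of the default.
     */
    LOG_MODULE_DECLARE(os, CONFIG_KERNEL_LOG_LEVEL);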
Add a new function to the mem_slab API that enables the user
to get the maximum number of slabs used so far.
Signed-off-by: Kamil Lazowski <Kamil.Lazowski@nordicsemi.no>
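A hedged usage sketch, assuming the accessor added here is named
k_mem_slab_max_used_get():

    #include <zephyr.h>

    K_MEM_SLAB_DEFINE(frame_slab, 64, 16, 4);

    void report_slab_watermark(void)
    {
            uint32_t peak = k_mem_slab_max_used_get(&frame_slab);

            printk("peak slab usage: %u of 16 blocks\n", peak);
    }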
Use the GEN_OFFSET_SYM macro to generate absolute symbols for the
_callee_saved struct and use these new symbols in the assembly code.
Signed-off-by: Carlo Caione <ccaione@baylibre.com>
This uses the timing functions to gather execution cycles of
threads. This provides greater detail if the arch/SoC/board
uses a timer with higher resolution.
Signed-off-by: Daniel Leung <daniel.leung@intel.com>
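For reference, the public timing API this builds on can be used roughly as
follows (illustrative only; the kernel hooks these calls into the
context-switch path rather than into application code):

    #include <zephyr.h>
    #include <timing/timing.h>

    void time_a_function(void (*fn)(void))
    {
            timing_t start, end;

            timing_init();
            timing_start();

            start = timing_counter_get();
            fn();
            end = timing_counter_get();

            uint64_t ns = timing_cycles_to_ns(timing_cycles_get(&start, &end));

            printk("took %u ns\n", (uint32_t)ns);
    }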
Since the tracing of threads being switched in/out has the same
instrumentation points, we can roll the tracing function calls
into the thread statistics gathering functions.
This avoids duplicating code to call another function.
Signed-off-by: Daniel Leung <daniel.leung@intel.com>
This adds the bits to gather the first thread runtime statistic:
thread execution time. It provides a rough idea of how much time
a thread spends in active execution. Currently it is not being
used, pending the following commits which combine it with the trace
points on context switch, as they instrument the same locations.
Signed-off-by: Daniel Leung <daniel.leung@intel.com>
Documentation for the kconfig SYS_CLOCK_TICKS_PER_SEC has some outdated
recommendations. Change them to align with other documentation
under kernel timing.
Fixes: #25482
Signed-off-by: Spoorthy Priya Yerabolu <spoorthy.priya.yerabolu@intel.com>
This legacy struct still had a non-standard name. Clean it up to
conform to current naming guidelines.
Signed-off-by: Andy Ross <andrew.j.ross@intel.com>
Fix the issue where the kernel poll code would place the tracking
struct on the caller stack and share it with other threads, thus
creating a cache coherence issue on systems where KERNEL_COHERENCE is
enabled.
This works by eliminating the thread backpointer in struct _poller and
simply placing the (now just two-byte!) struct directly into the
thread struct.
Note that this doesn't attempt to fix the API paradigm that the
natural way to structure a call to k_poll() is to use an array of
k_poll_events on the CALLER's stack. So it's likely that most
"typical" k_poll code is still going to have problems with
KERNEL_COHERENCE. But at least now the kernel internals aren't
fundamentally broken.
Signed-off-by: Andy Ross <andrew.j.ross@intel.com>
The poll code was playing a weird trick with the thread pointer in
the "struct _poller" object for a triggered work item. It was not
a thread to wake up, but instead a pointer to the (non-polling)
thread operated by the work queue being triggered. The code would
never touch this thread, just use it as a way to get a pointer to the
enclosing work queue struct.
Just store the work queue pointer in the first place. It's much
simpler, and makes future modifications to remove that thread pointer
possible.
Signed-off-by: Andy Ross <andrew.j.ross@intel.com>
The triggered work scheme uses a trick where it overloads the thread
pointer field of the struct poller (which normally stores the ID of
the thread that is blocked in k_poll()) to be able to find the work
queue to which it will submit.
Give it its own pointer field to break this false dependency.
Signed-off-by: Andy Ross <andrew.j.ross@intel.com>
Somewhat weirdly, k_poll() can do one of two things when the set of
events is signaled: it can wake up a sleeping thread, or it can submit
an unrelated work item to a work queue. The difference in behaviors
is currently captured by a callback, but as there are only two it's
cleaner to put this into a "mode" enumerant. That also shrinks the
size of the data such that the poller struct can be moved somewhere
other than the calling stack.
Signed-off-by: Andy Ross <andrew.j.ross@intel.com>
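The shape of the change is roughly the following (identifiers here are
placeholders, not the kernel's actual names):

    #include <stdbool.h>

    enum poller_mode {
            MODE_NONE,       /* not currently polling */
            MODE_POLL,       /* wake the thread sleeping in k_poll() */
            MODE_TRIGGERED,  /* submit a work item to a work queue */
    };

    /* Small enough to live inside the thread struct instead of on the
     * caller's stack.
     */
    struct poller_sketch {
            bool is_polling;
            enum poller_mode mode;
    };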
Some platforms may have multiple RAM regions which are
discontinuous in the physical memory map. We really want
these to be in a contiguous virtual region, and we need to
stop assuming that there is just one SRAM region that is
identity-mapped.
We no longer use CONFIG_SRAM_BASE_ADDRESS and CONFIG_SRAM_SIZE
as the bounds of kernel RAM, and no longer assume in the core
kernel that these are identity mapped at boot.
Two new Kconfigs, CONFIG_KERNEL_VM_BASE and
CONFIG_KERNEL_RAM_SIZE now indicate the bounds of this region
in virtual memory.
We are currently only memory-mapping physical device driver
MMIO regions so we do not need virtual-to-physical calculations
to re-map RAM yet. When the time comes an architecture interface
will be defined for this.
Platforms which just have one RAM region may continue to
identity-map it.
Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
Replaces all existing variants of value clamping that use the MIN and
MAX macros with the CLAMP macro.
Signed-off-by: Trond Einar Snekvik <Trond.Einar.Snekvik@nordicsemi.no>
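For example:

    #include <sys/util.h>

    int clamp_to_byte(int input)
    {
            /* previously written as MIN(MAX(input, 0), 255) */
            return CLAMP(input, 0, 255);
    }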
k_object_alloc(K_OBJ_THREAD) returns a usable struct k_thread pointer.
This pointer is 4-byte aligned. On x86 and x86_64, struct _thread_arch
has a member which requires a stricter alignment. Since this is
currently not supported, k_object_alloc(K_OBJ_THREAD) now returns an
error instead of a misaligned pointer.
Signed-off-by: Maximilian Bachmann <m.bachmann@acontis.com>
Toolchains other than Zephyr SDK may not support generating
code with thread local storage. So limit TLS to Zephyr SDK
for now, and only enable TLS on other toolchains as needed.
Fixes#29541
Signed-off-by: Daniel Leung <daniel.leung@intel.com>
k_queue_append and k_queue_alloc_append are subject to a race
condition.
Scenario:
Using either of the above-mentioned functions from a
preemptive context can lead to the element being
appended to the queue being lost. The element is lost
when k_queue_append is interrupted at a critical point,
when the list contains only one element and k_queue_get is
called before k_queue_append is allowed to complete.
Fix:
Move sys_sflist_peek_tail() inside queue_insert(). Add an additional
bool parameter to queue_insert() to indicate an append/prepend
operation.
Fixes#29257
Signed-off-by: iva kik <megatheriumiva@gmail.com>
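The essential shape of the fix is sketched below (simplified; the struct
internals used here and the elided insert logic should be treated as
assumptions):

    static int queue_insert_sketch(struct k_queue *queue, void *prev,
                                   void *data, bool alloc, bool is_append)
    {
            k_spinlock_key_t key = k_spin_lock(&queue->lock);

            ARG_UNUSED(data);
            ARG_UNUSED(alloc);

            if (is_append) {
                    /* Sample the tail under the same lock that performs
                     * the insert, so a concurrent k_queue_get() cannot
                     * invalidate it between the peek and the append.
                     */
                    prev = sys_sflist_peek_tail(&queue->data_q);
            }

            /* ... existing insert logic continues here, still locked,
             * and consumes prev ...
             */
            ARG_UNUSED(prev);

            k_spin_unlock(&queue->lock, key);
            return 0;
    }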
For threads that run in supervisor mode for some time before
synchronously dropping to user mode, re-initialize the TLS
area to prevent leakage of potentially sensitive information.
We did this already for CONFIG_THREAD_USERSPACE_LOCAL_DATA
but not the new CONFIG_THREAD_LOCAL_STORAGE.
Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
This enables storing errno in the thread local storage area.
With this enabled, a syscall to access errno can be avoided
when userspace is also enabled.
Signed-off-by: Daniel Leung <daniel.leung@intel.com>
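Mechanically this amounts to something like the sketch below; the config
and symbol names reflect my reading of the change and should be treated as
assumptions:

    #ifdef CONFIG_ERRNO_IN_TLS
    /* errno lives in thread-local storage: direct access, no syscall */
    extern __thread int z_errno_var;
    #define errno z_errno_var
    #else
    /* legacy path: a (potential) syscall returns a per-thread pointer */
    #define errno (*z_errno())
    #endif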
Adds the necessary bits to initialize TLS in the stack
area and sets up CPU registers during context switch.
Note that since Cortex-M does not have the thread ID or
process ID register needed to store the TLS pointer at runtime
for the toolchain to access thread data, a global variable is
used instead.
Signed-off-by: Daniel Leung <daniel.leung@intel.com>
This adds the common struct fields and functions to support
the implementation of thread local storage in individual
architectures. This uses the thread stack to store TLS data.
Signed-off-by: Daniel Leung <daniel.leung@intel.com>
Add kconfigs to indicate whether an architecture has support
for thread local storage (TLS), and to enable TLS in kernel.
Signed-off-by: Daniel Leung <daniel.leung@intel.com>
Arches which have custom swap to main can have _current be
NULL very, very early in the boot process. Check this to
avoid an infinite loop of fatal errors.
Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
The core kernel does not use this yet, but it will be later used
as part of infrastructure for memory-mapping stacks, as detailed
in #28899.
Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
In the MMU code, mapping_pos is miscalculated when, for example,
SRAM_BASE_ADDRESS==0x40000000 and KERNEL_VM_SIZE==0xc0000000, yielding
a mapping_pos of 0x0.
The problem is that the two values must be cast to uintptr_t before
the addition, rather than only casting the result, to avoid the
rollover to 0.
Signed-off-by: Carlo Caione <ccaione@baylibre.com>
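The fix takes roughly this shape (illustrative; the config symbols are
taken from the commit text and the surrounding MMU code is elided):

    /* Cast each operand to uintptr_t before the addition so the sum is
     * not computed in a narrower integer type and rolled over to 0,
     * then convert the result to a pointer.
     */
    static uint8_t *mapping_pos =
            (uint8_t *)((uintptr_t)CONFIG_SRAM_BASE_ADDRESS +
                        (uintptr_t)CONFIG_KERNEL_VM_SIZE);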
For compatibility layers like CMSIS where thread objects
are drawn from a pool, provide a context pointer to the
exited thread object so it may be freed.
This is somewhat obscure, has no supporting APIs or
overview documentation, and should be considered a private
kernel feature. Applications should really be using
k_thread_join() instead.
Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
If we call k_mem_domain_add_thread() to a memory domain
the thread already belongs to, do nothing.
Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
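A minimal sketch of the guard (the field access follows the kernel's
per-thread memory domain bookkeeping; the elided body is the existing add
logic):

    void k_mem_domain_add_thread(struct k_mem_domain *domain, k_tid_t thread)
    {
            if (thread->mem_domain_info.mem_domain == domain) {
                    /* already a member: nothing to do */
                    return;
            }

            /* ... otherwise remove the thread from its current domain
             * and add it to this one, as before ...
             */
    }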