The wording about deprecating arch_kernel_init() in favor of prep_c()
never materialized: various architectures still use it to
perform initialization. So remove the wording.
Signed-off-by: Daniel Leung <daniel.leung@intel.com>
Assert that the handler of a work item is not NULL when submitting
it to the queue. This allows early detection of
code that submits a non-NULL work item with a NULL handler to
the work queue (where the bug happens), rather than right before the
work item gets executed by the queue (when it manifests).
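A minimal sketch of what the earlier assert buys, using the standard k_work API (the NULL-handler call is deliberately wrong):
```c
#include <zephyr/kernel.h>

static void good_handler(struct k_work *work)
{
	/* ... process the work item ... */
}

K_WORK_DEFINE(good_work, good_handler);

void submit_examples(void)
{
	struct k_work bad_work;

	k_work_init(&bad_work, NULL);	/* buggy: NULL handler */
	k_work_submit(&bad_work);	/* now asserts here, at the call site */
	k_work_submit(&good_work);	/* fine: handler is non-NULL */
}
```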
Signed-off-by: Yong Cong Sin <ycsin@meta.com>
Zephyr's code base uses MP_MAX_NUM_CPUS to
know how many cores exist on the target. It is
also expected that both symbols MP_MAX_NUM_CPUS
and MP_NUM_CPUS have the same value, so let's
just use MP_MAX_NUM_CPUS and simplify things.
Signed-off-by: Flavio Ceolin <flavio.ceolin@intel.com>
Integrates the object core statistics framework into the following
kernel objects:
sys_mem_blocks, k_mem_slab
threads, _cpu, z_kernel
Signed-off-by: Peter Mitsis <peter.mitsis@intel.com>
Refactors CPU usage (thread runtime stats) to make it easier to
integrate with the object core statistics framework.
Signed-off-by: Peter Mitsis <peter.mitsis@intel.com>
Rearranges the k_mem_slab fields so that information that describes
how much of the memory slab is used is co-located. This will allow
easier integration of its statistics into the object core statistics reporting
framework.
Signed-off-by: Peter Mitsis <peter.mitsis@intel.com>
We don't need to re-implement a function to get the current CPU.
Simply use _current_cpu, which even contains additional sanity checks.
Signed-off-by: Flavio Ceolin <flavio.ceolin@intel.com>
When running inside the kernel we can use _current instead of
k_current_get(), which can incur an additional function call and
associated checks.
Signed-off-by: Flavio Ceolin <flavio.ceolin@intel.com>
This adds a function k_object_is_valid() to check if a kernel
object exists, is of a certain type, and has been initialized.
This replaces the same (or very similar) code that has been
copied from kernel into the network subsystem.
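A hypothetical caller-side sketch, assuming the usual K_OBJ_* type values:
```c
#include <zephyr/kernel.h>

/* Reject objects that are unknown, of the wrong type, or uninitialized. */
int take_if_valid(struct k_sem *sem)
{
	if (!k_object_is_valid(sem, K_OBJ_SEM)) {
		return -EINVAL;
	}

	return k_sem_take(sem, K_NO_WAIT);
}
```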
Signed-off-by: Daniel Leung <daniel.leung@intel.com>
The original idea of z_current_get() was to be the counterpart
of k_current_get() for when the thread-local variable for the current
thread has not yet been initialized (when TLS is enabled); otherwise they
are the same function. Now that z_current_get() is being used
outside of the core kernel, rename it under the kernel namespace so
other subsystems can conceptually use it too.
Signed-off-by: Daniel Leung <daniel.leung@intel.com>
Storing this value in milliseconds rather than using k_timeout_t requires
the system to perform division at runtime to convert types. This pulls in
the 64-bit soft division code on platforms without hardware for this.
Perform the conversion at build time instead by storing the k_timeout_t
directly.
The init_delay field was moved within the _static_thread_data structure to
avoid introducing a hole for alignment on 32-bit systems when using 64-bit
timeouts.
Use SYS_TIMEOUT_MS instead of K_MSEC so that the initial delay can be set
to forever.
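A small illustration of the difference, assuming the standard sys_clock.h definitions:
```c
void timeout_examples(void)
{
	/* K_MSEC() cannot express "forever"; SYS_TIMEOUT_MS() maps
	 * SYS_FOREVER_MS to K_FOREVER and any other value to K_MSEC(value).
	 */
	k_timeout_t t1 = SYS_TIMEOUT_MS(100);            /* == K_MSEC(100) */
	k_timeout_t t2 = SYS_TIMEOUT_MS(SYS_FOREVER_MS); /* == K_FOREVER */

	ARG_UNUSED(t1);
	ARG_UNUSED(t2);
}
```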
Signed-off-by: Keith Packard <keithp@keithp.com>
Previously we limited the maximum number of CPU cores to 5; now we are
bumping this restriction so we can use 12 cores.
Signed-off-by: Eugeniy Paltsev <Eugeniy.Paltsev@synopsys.com>
Signed-off-by: Evgeniy Paltsev <PaltsevEvgeniy@gmail.com>
The signal_poll_event function was previously called without the poll
lock held. This created a race condition between a thread calling k_poll
to wait for an event and another thread signalling for this same event.
This resulted in the waiting thread staying pending while the handle to it
got removed from the notifyq, meaning it couldn't get woken up
again.
Signed-off-by: Ambroise Vincent <ambroise.vincent@arm.com>
This internal kernel API is misplaced in a public kernel header. Just
make it available to the code using it in the kernel.
Signed-off-by: Anas Nashif <anas.nashif@intel.com>
The _EXPIRED macro is no longer necessary. It is a relic of an older
timeout processing algorithm from several years ago.
Signed-off-by: Peter Mitsis <peter.mitsis@intel.com>
This is a private kernel header with private kernel APIs, it should not
be exposed in the public zephyr include directory.
One sample remains to be fixed (metairq_dispatch), which currently uses
private APIs from that header; it should not do so.
Signed-off-by: Anas Nashif <anas.nashif@intel.com>
This header does not expose any public APIs, so move it under
kernel/include and change files including it.
Signed-off-by: Anas Nashif <anas.nashif@intel.com>
Add a missing assert argument, fixes:
zephyrproject/zephyr/kernel/dynamic.c: In function 'dyn_cb':
zephyrproject/zephyr/include/zephyr/sys/__assert.h:44:52: warning:
format '%p' expects a matching 'void *' argument [-Wformat=]
That started to break the build since:
d7846de548 assert: check format arguments for correctness
Signed-off-by: Fabio Baltieri <fabiobaltieri@google.com>
Add an assert to ensure the pointer provided by the user points to one
of the available blocks in the slab.
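A sketch of the idea behind the assert (field names are illustrative, not the exact in-tree layout):
```c
/* The pointer must lie inside the slab's buffer and be block-aligned. */
__ASSERT(((char *)mem >= slab->buffer) &&
	 (((char *)mem - slab->buffer) % slab->block_size == 0U) &&
	 ((char *)mem < slab->buffer + slab->block_size * slab->num_blocks),
	 "Invalid memory pointer provided");
```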
Signed-off-by: Carles Cufi <carles.cufi@nordicsemi.no>
Modify the signature of the k_mem_slab_free() function with a new one,
replacing the old void **mem with void *mem as a parameter.
The following function:
void k_mem_slab_free(struct k_mem_slab *slab, void **mem);
has the wrong signature. mem is only used as a regular pointer, so there
is no need to use a double-pointer. The correct signature should be:
void k_mem_slab_free(struct k_mem_slab *slab, void *mem);
The issue with the current signature, although functional, is that it is
extremely confusing. I myself, a veteran Zephyr developer, was confused
by this parameter when looking at it recently.
All in-tree uses of the function have been adapted.
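For illustration, a typical call site before and after the change:
```c
extern struct k_mem_slab my_slab;

void slab_example(void)
{
	void *block;

	if (k_mem_slab_alloc(&my_slab, &block, K_NO_WAIT) == 0) {
		/* before: k_mem_slab_free(&my_slab, &block); */
		k_mem_slab_free(&my_slab, block); /* after: the block itself */
	}
}
```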
Fixes #61888.
Signed-off-by: Carles Cufi <carles.cufi@nordicsemi.no>
Combining Meta IRQs with cooperative threads requires extra care to
return to pre-empted cooperative threads when returning from a Meta IRQ.
This is only needed when there are cooperative threads that are not also
Meta IRQs. This PR saves some space & time when the number of Meta IRQs
is equal to the number of available cooperative threads.
Signed-off-by: Florian Grandel <fgrandel@code-for-humans.de>
CONFIG_COVERAGE has been incorrectly used to
change other Kconfig options' (stack sizes, etc.)
code defaults, as well as some samples' behaviour,
which should not have depended on it.
Instead those should have depended on COVERAGE_GCOV,
which, being the one which adds special code and
temporary RAM storage for embedded targets,
requires changes to many features.
When building for the native targets, all this was
unnecessary.
=> Fix the dependency.
Signed-off-by: Alberto Escolar Piedras <alberto.escolar.piedras@nordicsemi.no>
This enables -Wshadow to warn about shadowed variables in
in-tree code under arch/, boards/, drivers/, kernel/,
lib/, soc/, and subsys/.
Note that this does not enable it globally because
out-of-tree modules will probably take some time to fix
(or not at all depending on the project), and it would be
great to avoid introduction of any new shadow variables
in the meantime.
Also note that this tries to be done in a minimally
invasive way so it is easy to revert when we enable
-Wshadow globally. Source files under modules/, samples/
and tests/ are currently excluded because there does not
seem to be a trivial way to add -Wshadow there without
going through all CMakeLists.txt to add the option
(as there are 1000+ files to change).
Signed-off-by: Daniel Leung <daniel.leung@intel.com>
This allows for further (out of tree) customisation of the boot
banner version string when devices boot.
Signed-off-by: Torsten Rasmussen <Torsten.Rasmussen@nordicsemi.no>
Signed-off-by: Jamie McCrae <jamie.mccrae@nordicsemi.no>
When `CONFIG_FPU_SHARING` is enabled each `k_thread` struct has a saved
floating point context (`saved_fp_context`). During a context switch, the
current FPU owner's (`_current_cpu->arch.fpu_owner`) registers are saved
to its `saved_fp_context`, and the destination thread's FPU registers are
loaded from its `saved_fp_context`.
When a thread ends, it does not release ownership of the FPU
(`_current_cpu->arch.fpu_owner`). This is problematic if the `k_thread`
struct was allocated on the stack. The next context switch will save the
FPU registers into `k_thread -> saved_fp_context` which may now be out of
scope. This will likely (but not always) result in a crash.
Adding `arch_float_disable(thread);` when a thread ends disables
preservation of floating point context information, fixing this issue.
Signed-off-by: Grant Ramsay <gramsay@enphaseenergy.com>
When CONFIG_KERNEL_DIRECT_MAP enabled, the region to be mapped
or unmapped can be outside of the virtual memory space, wholly
within it, or overlap partially. Additional processing is
needed to make sure we only manipulate the bits within
the bitmap, in other words, only the pages represented by
the bitmap.
Fixes #59549
Signed-off-by: Daniel Leung <daniel.leung@intel.com>
Add new option to use thread local storage for stack
canaries. This makes it harder to find the canaries' location
and value. This is made optional because there is
a performance and size penalty when using it.
Signed-off-by: Flavio Ceolin <flavio.ceolin@intel.com>
In commit d537267f, the check on thread abortion was moved from next_up
to z_get_next_switch_handle. However, next_up is also called from
z_swap_next_thread, so the check on thread abortion is now missing there.
This sometimes caused the thread to be stuck in ABORTING + PENDING state
during test_smp_switch_torture in tests/kernel/smp.
To avoid such cases in the future, it is worth leaving the check in next_up.
Signed-off-by: Vadim Shakirov <vadim.shakirov@syntacore.com>
This is meant as a substitute for sys_clock_timeout_end_calc().
Current sys_clock_timeout_end_calc() usage opens up many bug
possibilities due to the open-coded nature of the actual timeout
evaluation. Issue #50611 is one example.
- Some users store the returned value in a signed variable, others in
an unsigned one, making the comparison with UINT64_MAX (corresponding
to K_FOREVER) wrong in the signed case.
- Some users compute the difference and store that in a signed variable
to compare against 0 which still doesn't work with K_FOREVER. And when
this difference is used as a timeout argument then the K_FOREVER
nature of the timeout is lost.
- Some users complicate their code by special-casing K_NO_WAIT and
K_FOREVER inline, which is bad for both code readability and binary
size.
Let's introduce a better abstraction to deal with absolute timepoints
with an opaque type to be used with a well-defined API.
The word "timeout" was avoided in the naming on purpose as the timeout
namespace is quite crowded already and it is preferable to make a
distinction between relative time periods (timeouts) and absolute time
values (timepoints).
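A usage sketch, assuming the timepoint API names sys_timepoint_calc(), sys_timepoint_timeout() and sys_timepoint_expired():
```c
extern struct k_sem my_sem;

void wait_until_deadline(void)
{
	/* Compute the absolute deadline once, then derive the remaining
	 * timeout as needed. K_NO_WAIT and K_FOREVER survive the round
	 * trip, unlike with the open-coded arithmetic described above.
	 */
	k_timepoint_t end = sys_timepoint_calc(K_MSEC(500));

	while (!sys_timepoint_expired(end)) {
		if (k_sem_take(&my_sem, sys_timepoint_timeout(end)) == 0) {
			break;	/* acquired before the deadline */
		}
	}
}
```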
A few stacks are also adjusted as they were too tight on X86.
Signed-off-by: Nicolas Pitre <npitre@baylibre.com>
With some of the recent work to disable unnecessary system
calls, there is a scenario where `z_impl_k_thread_stack_free()`
is not defined and an undefined symbol error occurs.
The safety team was very concerned that dynamic thread stack code might
touch other code that does not use malloc, so add a separate file
for the stack alloc and free stubs.
Signed-off-by: Christopher Friedt <cfriedt@meta.com>
This allows for builds with CONFIG_SYS_CLOCK_EXISTS=n in which case
busy waits are achieved with a crude CPU loop. If ever accuracy is
needed even with such a configuration then implementing arch_busy_wait()
should be considered.
Signed-off-by: Nicolas Pitre <npitre@baylibre.com>
Since the rbtree is being used as a list because we can no longer
assume that the object pointer is the address of the
data field in the dynamic object struct, let's just use
the already existing dlist for tracking dynamic kernel
objects.
Signed-off-by: Flavio Ceolin <flavio.ceolin@intel.com>
Fix the preference allocation logic. If pool is preferred but POOL_SIZE
is 0 or the pool allocation fails, it falls back to heap allocation if
that is enabled.
Signed-off-by: Flavio Ceolin <flavio.ceolin@intel.com>
Add support for dynamic thread stack objects. A new container
for this kernel object was added to avoid imposing its alignment
constraint on all dynamic objects.
Signed-off-by: Flavio Ceolin <flavio.ceolin@intel.com>
Add a new API to dynamically allocate kernel objects that allows
passing an arbitrary size. This new API makes it possible to
allocate dynamic thread stacks.
Signed-off-by: Flavio Ceolin <flavio.ceolin@intel.com>
While the LOCKED pattern is universally useful it can be misused. This
change therefore exposes the LOCKED pattern with extensive usage
documentation to reduce the risk of abuse or unintended deadlock.
Signed-off-by: Florian Grandel <fgrandel@code-for-humans.de>
Update the return value of functions that modify the internal event
state from `void` to `uint32_t`, so that calling code can determine
whether the event was already in a given state, or if the call modified
it.
This simplifies the usage of `struct k_event` as an alternative to
`atomic_t` that users can block on.
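A short sketch of what the returned state enables (the event bit name is made up):
```c
#define EVT_DATA_READY BIT(0)

extern struct k_event my_event;

void producer(void)
{
	/* k_event_post() now returns the event state prior to posting. */
	uint32_t prev = k_event_post(&my_event, EVT_DATA_READY);

	if ((prev & EVT_DATA_READY) != 0U) {
		/* Bit was already set: this post did not change the state. */
	}
}
```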
Implements #57216
Signed-off-by: Jordan Yates <jordan.yates@data61.csiro.au>
Scheduling relative timeouts from within timer callbacks (=sys clock ISR
context) differs from scheduling relative timeouts from an application
context.
This change documents and explains the rationale of this distinction.
Signed-off-by: Florian Grandel <fgrandel@code-for-humans.de>
Device dependencies are not always required, so make them optional via
CONFIG_DEVICE_DEPS. When enabled, the gen_device_deps script will run so
that dependencies are collected and made part of the final image. Related
APIs will also be made available. Since device dependencies are used in
just a few places (power domains), disable the feature by default. When
not enabled, a second linking pass will not be required.
Signed-off-by: Gerard Marull-Paretas <gerard.marull@nordicsemi.no>
The option can now be set by projects. This change will also allow
making it dependent on a future CONFIG_DEVICE_DEPS option.
Signed-off-by: Gerard Marull-Paretas <gerard.marull@nordicsemi.no>
Rename the Kconfig option to be in line with recent renamings in device
handles/dependencies.
Signed-off-by: Gerard Marull-Paretas <gerard.marull@nordicsemi.no>
Rename struct device `handles` member to `deps`, in line with previous
renamings in the device API.
Signed-off-by: Gerard Marull-Paretas <gerard.marull@nordicsemi.no>
This adds a few lines using zephyr_syscall_header() to include
headers containing syscall function prototypes.
Signed-off-by: Daniel Leung <daniel.leung@intel.com>
Only set a CPU as active (in the PM subsystem) when the CPU is effectively
initialized. We cannot assume in the PM subsystem that all CPUs were
initialized, since when the option CONFIG_SMP_BOOT_DELAY is used CPUs are
initialized on demand by the application.
Note that once CPUs are properly initialized the subsystem is able to track
their status.
Signed-off-by: Flavio Ceolin <flavio.ceolin@intel.com>
As discovered by Carlo Caione, the k_thread_join code had a case where
it detected it had been called on a thread already marked _THREAD_DEAD
and exited early. That's not sufficient. The thread state is mutated
from the thread itself on its exit path. It may still be running!
Just like the code in z_swap(), we need to spin waiting on the other
CPU to write the switch handle before knowing it's safe to return,
otherwise the calling context might (and did) do something like
immediately k_thread_create() a new thread in the "dead" thread's
struct while it was still running on the other core.
There was also a similar case in k_thread_abort() which had the same
issue: it needs to spin waiting on the other CPU to kill the thread
via the same mechanism.
Fixes #58116
Originally-by: Carlo Caione <ccaione@baylibre.com>
Signed-off-by: Andy Ross <andyross@google.com>
The switch_handle field in the thread struct is used as an atomic flag
between CPUs in SMP, and has been known for a long time to technically
require memory barriers for correct operation. We have an API for
that now, so put them in:
* The code immediately before arch_switch() needs a write barrier to
ensure that thread state written by the scheduler is seen to happen
before the outgoing thread is flagged with a valid switch handle.
* The loop in z_sched_switch_spin() needs a read barrier at the end,
to make sure the calling context doesn't load state from before the
other CPU stored the switch handle.
Also, that same spot in switch_spin was spinning with interrupts masked,
which means it needs a call to arch_spin_relax() to avoid an FPU state
deadlock on some architectures.
Signed-off-by: Andy Ross <andyross@google.com>
This trick turns out also to be needed by the abort/join code.
Promote it to a more formal-looking internal API and clean up the
documentation to (hopefully) clarify the exact behavior and better
explain the need.
This is one of the more... enchanted bits of the scheduler, and while
the trick is IMHO pretty clean, it remains a big SMP footgun.
Signed-off-by: Andy Ross <andyross@google.com>
Many RTOS applications assume the virtual and physical addresses
are mapped 1:1, so add 1:1 mapping support in z_phys_map()
to easily adapt these applications.
Signed-off-by: Hou Zhiqiang <Zhiqiang.Hou@nxp.com>
Give architectures that need it the ability to perform special checks
while e.g. waiting for a spinlock to become available.
Signed-off-by: Nicolas Pitre <npitre@baylibre.com>
Introduce a new API for barrier operations starting with a general
skeleton and the implementation for barrier_data_memory_fence_full().
Select a built-in or an arch-based implementation according to new
Kconfig symbols CONFIG_BARRIER_OPERATIONS_BUILTIN and
CONFIG_BARRIER_OPERATIONS_ARCH.
The built-in implementation falls back on the compiler built-in
function using __ATOMIC_SEQ_CST as it is done for the atomic APIs
already.
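A minimal sketch of the BUILTIN flavour described above (the arch-side function name is hypothetical):
```c
static ALWAYS_INLINE void barrier_data_memory_fence_full(void)
{
#if defined(CONFIG_BARRIER_OPERATIONS_BUILTIN)
	/* Full sequentially-consistent fence, as the atomic APIs do */
	__atomic_thread_fence(__ATOMIC_SEQ_CST);
#elif defined(CONFIG_BARRIER_OPERATIONS_ARCH)
	z_arch_barrier_full();	/* hypothetical arch-provided implementation */
#endif
}
```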
Signed-off-by: Carlo Caione <ccaione@baylibre.com>
z_page_frame can't be packed on Xtensa due to memory alignment
constraints. When this struct is packed it is 5 bytes long, which will
cause a memory alignment problem on Xtensa.
Signed-off-by: Flavio Ceolin <flavio.ceolin@intel.com>
Until now iterable sections APIs have been part of the toolchain
(common) headers. They are not strictly related to a toolchain, they
just rely on the linker providing support for sections. Most files relied on
indirect includes to access the API; now, it is included as needed.
Signed-off-by: Gerard Marull-Paretas <gerard.marull@nordicsemi.no>
When a running thread gets aborted asynchronously (this only happens
in SMP contexts, obviously) it gets flagged "aborting", but the actual
abort needs to happen in the thread's own context. For convenience,
this was done in the next_up() routine that selects the next thread to
run at interrupt exit time.
But this check was being done AFTER the next candidate thread was
selected from the run queue. Thread abort can wake up threads blocked
in k_thread_join(), and therefore these weren't seen as runnable
threads, even if they should have been.
Executive summary: if you killed a thread running on another CPU, and
there was another thread joined to the killed thread that should have
run on that CPU, it wouldn't (until it received an interrupt or
otherwise reached a schedule point).
Move the abort check above the run queue inspection and into the
end-of-interrupt processing in z_get_next_switch_handle() (so it's
actually a mild performance boost as it's no longer part of the
cooperative context switch path). Simple fix, subtle bug.
Fixes #58040
Signed-off-by: Andy Ross <andyross@google.com>
The exception handler (arch/x86/core/ia32/excstub.S) may access the
_kernel variable, which will lead to failure when paging is enabled,
so make this critical variable pinned.
Signed-off-by: Qipeng Zha <qipeng.zha@intel.com>
The ACE 2.0 LNL platform has 5 HIFI4 cores. Change the number
of cores to enable the 5th core on the platform.
Signed-off-by: Jaroslaw Stelter <Jaroslaw.Stelter@intel.com>
Without these parentheses, specifying a q_max_msgs of e.g.
`MY_DEFAULT_QUEUESIZE+1` would result in a buffer of size
(1 element + MY_DEFAULT_QUEUESIZE bytes).
This would then lead to an unbounded buffer overflow because the queue
never reaches the exact (offset by MY_DEFAULT_QUEUESIZE bytes)
`buffer_end` and just keeps writing.
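An illustrative reduction of the bug class (macros and sizes are made up):
```c
#define BUF_BAD(q_max_msgs, q_msg_size) char buf[q_max_msgs * q_msg_size]
#define BUF_OK(q_max_msgs, q_msg_size)  char buf[(q_max_msgs) * (q_msg_size)]

/* BUF_BAD(SIZE + 1, 4) -> char buf[SIZE + 1 * 4] : 1 element + SIZE bytes */
/* BUF_OK(SIZE + 1, 4)  -> char buf[(SIZE + 1) * 4] : the intended size    */
```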
Additionally, add asserts to make sure this can't happen again.
Signed-off-by: Armin Brauns <armin.brauns@embedded-solutions.at>
Use iterable sections to handle devices list. This simplifies devices
implementation by using standard APIs.
Signed-off-by: Gerard Marull-Paretas <gerard.marull@nordicsemi.no>
When building sample.minimal.mt-no-preempt-no-timers.arm on arm-clang
we get a link error as z_pm_save_idle_exit expects sys_clock_idle_exit
to be defined.
However the sample sets CONFIG_SYS_CLOCK_EXISTS=n so
sys_clock_idle_exit() will not be defined by any driver. So add proper
ifdef protection in z_pm_save_idle_exit to fix this.
Signed-off-by: Kumar Gala <kumar.gala@intel.com>
When a semaphore is given and there is no thread waiting
for it, do not unconditionally perform a reschedule.
Signed-off-by: Peter Mitsis <peter.mitsis@intel.com>
Some devices do not need to perform any initialization, so allow the
init function to be NULL. In this case, the initialization code will
just mark the device as initialized, i.e. ready.
Signed-off-by: Gerard Marull-Paretas <gerard.marull@nordicsemi.no>
Removes unused absolute symbols that are defined via the
GEN_ABSOLUTE_SYM() macro in the kernel directory.
Signed-off-by: Peter Mitsis <peter.mitsis@intel.com>
As both C and C++ standards require applications running under an OS to
return 'int', adapt that for Zephyr to align with those standards. This also
eliminates errors when building with clang when not using -ffreestanding,
and reduces the need for compiler flags to silence warnings for both clang
and gcc.
Most of these changes were automated using coccinelle with the following
script:
@@
@@
- void
+ int
main(...) {
...
- return;
+ return 0;
...
}
Approximately 40 files had to be edited by hand as coccinelle was unable to
fix them.
Signed-off-by: Keith Packard <keithp@keithp.com>
As both C and C++ standards require applications running under an OS to
return 'int', adapt that for Zephyr to align with those standards. This also
eliminates errors when building with clang when not using -ffreestanding,
and reduces the need for compiler flags to silence warnings for both clang
and gcc.
Signed-off-by: Keith Packard <keithp@keithp.com>
Many areas of Zephyr divide and round up without using the DIV_ROUND_UP
macro. Make use of it, so that we rely on a tested system macro and
at the same time make the code more readable.
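For example (BLOCK_SIZE is just a stand-in value):
```c
#include <zephyr/sys/util.h>

#define BLOCK_SIZE 64U

size_t blocks_needed(size_t len)
{
	/* before: return (len + BLOCK_SIZE - 1) / BLOCK_SIZE; */
	return DIV_ROUND_UP(len, BLOCK_SIZE);
}
```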
Signed-off-by: Gerard Marull-Paretas <gerard.marull@nordicsemi.no>
The init infrastructure, found in `init.h`, is currently used by:
- `SYS_INIT`: to call functions before `main`
- `DEVICE_*`: to initialize devices
They are all sorted according to an initialization level + a priority.
`SYS_INIT` calls are really orthogonal to devices, however, the required
function signature requires a `const struct device *dev` as a first
argument. The only reason for that is because the same init machinery is
used by devices, so we have something like:
```c
struct init_entry {
int (*init)(const struct device *dev);
/* only set by DEVICE_*, otherwise NULL */
const struct device *dev;
}
```
As a result, we end up with such weird/ugly pattern:
```c
static int my_init(const struct device *dev)
{
/* always NULL! add ARG_UNUSED to avoid compiler warning */
ARG_UNUSED(dev);
...
}
```
This is really a result of poor internals isolation. This patch proposes
to make init entries more flexible so that they can accept system
initialization calls like this:
```c
static int my_init(void)
{
...
}
```
This is achieved using a union:
```c
union init_function {
/* for SYS_INIT, used when init_entry.dev == NULL */
int (*sys)(void);
/* for DEVICE*, used when init_entry.dev != NULL */
int (*dev)(const struct device *dev);
};
struct init_entry {
/* stores init function (either for SYS_INIT or DEVICE*) */
union init_function init_fn;
/* stores device pointer for DEVICE*, NULL for SYS_INIT. Allows
* to know which union entry to call.
*/
const struct device *dev;
}
```
This solution **does not increase ROM usage**, and allows offering clean
public APIs for both SYS_INIT and DEVICE*. Note, however, that the init
machinery keeps a coupling with devices.
**NOTE**: This is a breaking change! All `SYS_INIT` functions will need
to be converted to the new signature. See the script offered in the
following commit.
Signed-off-by: Gerard Marull-Paretas <gerard.marull@nordicsemi.no>
init: convert SYS_INIT functions to the new signature
Conversion scripted using scripts/utils/migrate_sys_init.py.
Signed-off-by: Gerard Marull-Paretas <gerard.marull@nordicsemi.no>
manifest: update projects for SYS_INIT changes
Update modules with updated SYS_INIT calls:
- hal_ti
- lvgl
- sof
- TraceRecorderSource
Signed-off-by: Gerard Marull-Paretas <gerard.marull@nordicsemi.no>
tests: devicetree: devices: adjust test
Adjust test according to the recently introduced SYS_INIT
infrastructure.
Signed-off-by: Gerard Marull-Paretas <gerard.marull@nordicsemi.no>
tests: kernel: threads: adjust SYS_INIT call
Adjust to the new signature: int (*init_fn)(void);
Signed-off-by: Gerard Marull-Paretas <gerard.marull@nordicsemi.no>
Add check to ensure that CONFIG_MP_NUM_CPUS and CONFIG_MP_MAX_NUM_CPUS
are set the same. This will at least cause a build issue for out-of-tree
users.
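A sketch of such a check, assuming both symbols are always defined:
```c
BUILD_ASSERT(CONFIG_MP_NUM_CPUS == CONFIG_MP_MAX_NUM_CPUS,
	     "CONFIG_MP_NUM_CPUS and CONFIG_MP_MAX_NUM_CPUS need to be set the same");
```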
Signed-off-by: Kumar Gala <kumar.gala@intel.com>
All we really want here is to set default parameters. However
k_sched_time_slice_set() also calls z_reset_time_slice(_current)
which expects `_current` to be fully initialized.
Simply initialize `slice_ticks` and `slice_max_prio` with default values
directly. Unfortunately the compiler isn't smart enough to expand
k_ms_to_ticks_ceil32(CONFIG_TIMESLICE_SIZE) to a constant expression
at build time so we must do the conversion by hand (and it shouldn't
overflow due to the nature of the value).
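A hypothetical sketch of the idea (not the exact in-tree expressions):
```c
/* Constant-expression defaults; the ms-to-ticks conversion is done by
 * hand so it stays computable at build time.
 */
static int slice_ticks = DIV_ROUND_UP(CONFIG_TIMESLICE_SIZE *
				      CONFIG_SYS_CLOCK_TICKS_PER_SEC, 1000);
static int slice_max_prio = CONFIG_TIMESLICE_PRIORITY;
```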
Signed-off-by: Nicolas Pitre <npitre@baylibre.com>
Slice expirations are now based on the same timeout mechanism as
regular timers which have been recently fixed and proven to work with
single-tick periods.
Signed-off-by: Nicolas Pitre <npitre@baylibre.com>
The reason for arch_num_cpus() is to be able to dynamically adapt to
the actual number of available CPUs at run time.
In the z_sched_init() case, it is not the number of active CPUs that
we need but rather the total number of potential CPUs, and that is
represented by CONFIG_MP_MAX_NUM_CPUS not arch_num_cpus().
Signed-off-by: Nicolas Pitre <npitre@baylibre.com>
Add the `zephyr,pm-device-runtime-auto` flag to `pm.yaml` and
`struct pm_device`.
This flag is intended to signify to the boot system that device runtime
PM should be automatically enabled on the device after the init function
has run.
Only run `pm_device_runtime_auto_enable` function on a device if
initialisation succeeded. This prevents actions being run on devices
that are not ready.
Signed-off-by: Jordan Yates <jordan.yates@data61.csiro.au>
Make sliceable() the actual condition for a sliceable thread. Avoid
creating a slice timeout for non sliceable threads. Always reset
slice_expired even if the next thread is not sliceable. Fold
slice_expired_locked() into z_time_slice() to avoid the hidden
unlock/lock. Change `curr` to `thread` as this is not necessarily
the current thread (yet) being set. Make variables static.
Signed-off-by: Nicolas Pitre <npitre@baylibre.com>
Updates events to prevent a timeout from corrupting the list of
threads that need to be woken up.
Signed-off-by: Aastha Grover <aastha.grover@intel.com>
Fixes race condition for k_event_post_internal() in an
SMP environment while walking the waitq. Uses z_sched_waitq_walk()
to safely walk the waitq by using a sched_spinlock.
It should be noted that since walking the wait queue is an
operation of indeterminate length, there exists the possibility
that the sched_spinlock (which is a highly used and contended-for
lock) may be locked for an indeterminate amount of time. However,
it is expected that few threads will be waiting on any given kernel
event object, which should ameliorate this risk.
Fixes #54317
Signed-off-by: Aastha Grover <aastha.grover@intel.com>
Moving timeslice events to timeouts isn't quite enough on SMP, as it's
still possible for systems that don't broadcast their timer interrupts
to end up handling an expiration for a foreign CPU. There, we need an
IPI, and a symmetric call to z_time_slice() (which is idempotent and
fast) in the IPI ISR.
Signed-off-by: Andy Ross <andyross@google.com>
Rework the fragile and ad-hoc computation of timeslice expirations
into per-CPU struct _timeout objects with regular callbacks. The
expiration callbacks themselves simply set a per-cpu flag (they might
run on any CPU), which gets checked at the end of the timer ISR on
every CPU.
This simplifies logic and removes a bunch of code. It also fixes at
least three bugs:
1. As @npitre discovered: On SMP, the number of ticks announced on any
given CPU is going to be a subset of all expired ticks. This broke
the accounting of timeslice ticks, and effectively meant that
timeslicing only worked on SMP on systems where one CPU could hog all
the announcements, and only on that CPU.
2. The bootstrap path to arm the timer driver after setting the first
timeout in an empty list couldn't take into account
sys_clock_elapsed() ticks, as it didn't know whether it was being
called underneath an existing announce loop. Now this code is no
longer responsible for knowing anything about time slicing at all.
3. Also on SMP, there was a case where two CPUs timeslicing
simultaneously could stomp on each others' timeouts in
z_set_timeout_expiry(), as neither had a way of knowing what the
other's state was. CPUs could miss their own expiration and have to
wait for the slice expiration on the other CPU. Now, timeouts are
global objects with simple expiration times, and there's no need for
that function at all.
Signed-off-by: Andy Ross <andyross@google.com>
Some of the offset symbols that are derived from the macro
GEN_OFFSET_SYM() are not used anywhere in the Zephyr codebase.
Remove them as part of a cleanup effort.
Instances of an associated GEN_OFFSET_SYM() have also been
removed when the resulting macro is no longer referenced.
Signed-off-by: Peter Mitsis <peter.mitsis@intel.com>
Some of the offset symbols generated via the macro GEN_OFFSET_SYM()
are not used anywhere in the Zephyr codebase. Remove them as part of
a cleanup effort.
Signed-off-by: Peter Mitsis <peter.mitsis@intel.com>
Commit 3e729b2b1c ("kernel/timer: Correctly clamp period argument")
increased the lower limit to 1 so that it wouldn't conflict with a
K_NO_WAIT. But in doing so it enforced a minimum period of 2 ticks.
And the subtraction must obviously be avoided if the period is zero, etc.
Instead of doing this masquerade in k_timer_start(), let's move the
subtraction and clamping in z_timer_expiration_handler() right before
registering a new timeout. It makes the code cleaner, and then it is
possible to have single-tick periods again.
With this, timer_jitter_drift in tests/kernel/timer/timer_behavior does
pass with any CONFIG_SYS_CLOCK_TICKS_PER_SEC value, even when the tick
period is equal to or larger than the specified timer period for the
test, a case which failed before.
Signed-off-by: Nicolas Pitre <npitre@baylibre.com>
The call to unschedule_locked() would return true ("successfully
unscheduled") even in the case where the underlying z_abort_timeout()
failed (because the callback was already unpended and
in-progress/complete/about-to-be-run, remember that timeout callbacks
are unsynchronized), leading to state bugs and races against the
callback behavior.
Correctly detect that case and propagate the error to the caller.
Fixes #51872
Signed-off-by: Andy Ross <andyross@google.com>
Fixes sporadic data access violations that were occurring when pipes
were being used from an ISR. The ISR was incorrectly using the pipe
descriptor belonging to the interrupted thread. This led to corrupted
pipe meta-data. The solution proposed here is to perform a run-time
check and use a pipe descriptor on the ISR's stack if called from
an ISR.
For additional information, see:
https://github.com/zephyrproject-rtos/zephyr/issues/52812
Signed-off-by: Peter Mitsis <peter.mitsis@intel.com>
Adds a spin lock/unlock barrier pair after a pipe thread wakes.
After the list of waiting threads is generated, it is possible for
threads on that list to timeout and be removed from the wait queue.
However, since that list was generated before the timeout occurred,
the timed-out thread must wait until the copying is done (the
pipe's spin-lock has been released).
Signed-off-by: Peter Mitsis <peter.mitsis@intel.com>
By the time the working list of readers/writers is processed, it is
possible that the waiting reader/writer being processed has timed out
and is no longer on the wait queue. As such, we cannot blindly
wake the next thread, as that next thread might not be the thread we
had just been processing.
To address this, the calls to z_sched_wake() have been replaced
with z_unpend_thread() and z_ready_thread() so that a specific
thread can be safely targeted for waking.
Signed-off-by: Peter Mitsis <peter.mitsis@intel.com>
Uses the new z_sched_waitq_walk() routine to walk the pipe's wait
queue to build a list of waiting threads that will be used for
the data transfer.
This method is preferred over the previous one as it ensures that the
wait queue is safely traversed.
Signed-off-by: Peter Mitsis <peter.mitsis@intel.com>
Adds a routine to safely walk a specified wait queue and invoke a
custom callback function on each waiting thread.
Signed-off-by: Peter Mitsis <peter.mitsis@intel.com>
When a timer is restarted from a high priority interrupt, it may
happen that the timer is re-added to the timeout list right after
it is removed from that list prior to execution of its expiration
handler but before that execution actually occurs. This leads to
an assertion failure reported for `z_add_timeout()` because then
that function, called from `z_timer_expiration_handler()` for
periodic timers, turns out to be adding a timeout that is already
added to the timeout list.
This commit detects such a situation in `z_timer_expiration_handler()`
and makes that function exit immediately when that occurs (as the
timer was restarted, its expiration handler should not be executed).
Signed-off-by: Andrzej Głąbek <andrzej.glabek@nordicsemi.no>
Most of the time, z_cstart() is running on an arbitrary region
of memory as its stack, where the necessary stack setup has not been
performed. This prevents stack protection from working correctly,
as the stack canary has not been populated. So mark z_cstart()
as having no stack protection at all inside the function to avoid
raising an exception during boot.
Signed-off-by: Daniel Leung <daniel.leung@intel.com>
This commit updates all in-tree code to use `CONFIG_CPP` instead of
`CONFIG_CPLUSPLUS`, which is now deprecated.
Signed-off-by: Stephanos Ioannidis <stephanos.ioannidis@nordicsemi.no>
At least one static analysis tool is flagging a potential NULL
dereference in sys_clock_announce()'s tick processing loop where the
routine 'first()' is concerned. In practice, this does not occur as
...
1. The code in question is protected by a spinlock.
2. 'first()' does not change the contents of anything.
The code has consequently been tweaked to prevent similar such false
positives in the future.
Signed-off-by: Peter Mitsis <peter.mitsis@intel.com>
Accurate timekeeping is something that is often taken for granted.
However, reliability of timekeeping code is critical for most core
and subsystem code. Furthermore, many higher-level timekeeping
utilities in Zephyr work off of ticks, but there is no way to modify
ticks directly, which would require either unnecessary delays in
test code or non-ideal compromises in test coverage.
Since timekeeping is so critical, there should be as few barriers
to testing timekeeping code as possible, while preserving
integrity of the kernel's public interface.
With this, we expose `sys_clock_tick_set()` as a system call only
when `CONFIG_ZTEST` is set, declared within the ztest framework.
Signed-off-by: Chris Friedt <cfriedt@meta.com>
The following testcases fail with qemu_cortex_r5 caused by main stack
overflow.
tests/kernel/workq/work_queue/kernel.workqueue
tests/ztest/base/testing.ztest.base.verbose_0_userspace
The main stack size is 512 for qemu_cortex_r5 (a Cortex-A/R aarch32
platform) with CONFIG_ZTEST=y. The Cortex-M platforms are already set to
1024. Likely 512 will fail for most aarch32 platforms soon.
Fix the issue by increasing the CONFIG_MAIN_STACK_SIZE to 1024.
Also, remove 'default 1024 if TEST_ARM_CORTEX_M' since Cortex-M is no
longer an exception of default 1024.
Signed-off-by: Jaxson Han <jaxson.han@arm.com>
- Logging supports printing 64-bit values now. Cast to unsigned long and
use %lu everywhere.
Signed-off-by: Gerard Marull-Paretas <gerard.marull@nordicsemi.no>
In z_phys_unmap(), the call to virt_region_free() is not using
the aligned virtual address and size. This can result in freeing a
smaller region than was allocated, given that inputs to z_phys_unmap()
may not be aligned. So use the already calculated aligned
virtual address and size as input to virt_region_free().
Note that the assertion and if-block in virt_region_free() to
check whether the to-be-unmapped region is within the virtual
memory region needs to be trimmed by one byte at the end.
The assertion and if-block are checking against the region
end address but (start + size) is just one byte over the end.
So subtract one.
Signed-off-by: Daniel Leung <daniel.leung@intel.com>
The C++ standard requires the main() function to have the return type
of 'int' and does not allow the main() to be defined with the 'void'
return type. Moreover, GCC goes as far as to emit a hard error when the
'::main()' has the return type of `void`.
This commit introduces an option to instruct the Zephyr kernel to call
the 'int main(void)' instead of the 'void main(void)' in case a Zephyr
application defines main() in a C++ source file.
Signed-off-by: Stephanos Ioannidis <stephanos.ioannidis@nordicsemi.no>
Move runtime code to use arch_num_cpus() instead of CONFIG_MP_NUM_CPUS
and use CONFIG_MP_MAX_NUM_CPUS for ifdef and BUILD_ASSERT macros.
Signed-off-by: Kumar Gala <kumar.gala@intel.com>
Cleanup the mess of duplicate function definitions, unnecessary
variables and duplicate strings. All banner strings are now constant in
ROM. Also fixes a double space between the end of the version string and
the trailing `***` when there is no boot delay.
Signed-off-by: Jordan Yates <jordan.yates@data61.csiro.au>
The BOOT_DELAY option does nothing in code if MULTITHREADING is not
enabled. Move the dependency to Kconfig instead.
Signed-off-by: Jordan Yates <jordan.yates@data61.csiro.au>
Change for loops of the form:
for (i = 0; i < CONFIG_MP_NUM_CPUS; i++)
...
to
unsigned int num_cpus = arch_num_cpus();
for (i = 0; i < num_cpus; i++)
...
We do the call outside of the for loop so that it only happens once,
rather than on every iteration.
Signed-off-by: Kumar Gala <kumar.gala@intel.com>
Continue to phase out MP_NUM_CPUS, change Kconfig to be
MP_MAX_NUM_CPUS and make MP_MAX_NUM_CPUS the main Kconfig symbol.
Signed-off-by: Kumar Gala <kumar.gala@intel.com>
The dummy thread doesn't include a TLS area, so any thread local variables
will fail to work if used in the switched_out tracing hook. Skip the hook
in this case, as it's not really accurate anyway; the dummy thread is
only used to set context for the initial switch for each core.
Signed-off-by: Keith Packard <keithp@keithp.com>
Using char pointers for %p should be avoided in log messages. It will
cause issues in configurations where logging strings are removed from
the binary and they are not inspected when cbprintf packages from
a logging string are built. In that case any char pointers are treated as
strings and copied into the package body.
Signed-off-by: Krzysztof Chruscinski <krzysztof.chruscinski@nordicsemi.no>
Introduce a Kconfig (MP_MAX_NUM_CPUS) and an api arch_num_cpus() to
allow for systems that might determine the number of CPUs available to
Zephyr at runtime.
CONFIG_MP_MAX_NUM_CPUS is intended to be used for any array initialization
and such that needs to occur at build time. For most systems
arch_num_cpus() will just report the value of CONFIG_MP_MAX_NUM_CPUS.
The intent is to phase out CONFIG_MP_NUM_CPUS.
Signed-off-by: Kumar Gala <kumar.gala@intel.com>
Warnings being treated as errors when building:
Error this 'for' clause does not guard...
[-Werror=misleading-indentation]
Signed-off-by: Francois Ramu <francois.ramu@st.com>
The _SYS_INIT_LEVEL* definitions were used to indicate the index entry
into the levels array defined in init.c (z_sys_init_run_level). init.c
uses this information internally, so there is no point in exposing this
in a public header. It has been replaced with an enum inside init.c. The
device shell was re-using the same defines to index its own array. This
is a fragile design; the shell needs to be responsible for its own data
indexing. A similar situation happened with some unit tests.
Signed-off-by: Gerard Marull-Paretas <gerard.marull@nordicsemi.no>
The function in charge of calling all init functions was defined in
device.c, had a public prototype and was just used in init.c. Since this
is really an internal function tied to Kernel init code, move it to
init.c and make it static, there's no need to expose it publicly.
Signed-off-by: Gerard Marull-Paretas <gerard.marull@nordicsemi.no>
The `ARCH` init level was added to solve a specific problem, call init
code (SYS_INIT/devices) before `z_cstart` in the `intel_adsp` platform.
The documentation claims it runs before `z_cstart`, but this is only
true if the SoC/arch takes care of calling:
```c
z_sys_init_run_level(_SYS_INIT_LEVEL_ARCH);
```
Which is only true for `intel_adsp` nowadays. So in practice, we now
have a platform specific init level. This patch proposes to do things in
a slightly different way. First, level name is renamed to `EARLY`, to
emphasize it runs in the early stage of the boot process. Then, it is
handled by the Kernel (inside `z_cstart()` before calling
`arch_kernel_init()`). This means that any platform can now use this
level. For `intel_adsp`, there should be no changes, other than
`gcov_static_init()` will be called earlier (I assume this will allow
obtaining coverage for code called in EARLY?).
Signed-off-by: Gerard Marull-Paretas <gerard.marull@nordicsemi.no>
For historical reasons[1] suspending threads would release the
scheduler lock between pend() (which places the current thread onto a
wait queue) and z_swap() (which effects the context switch). This
process happens with the caller's lock held, so local interrupts are
masked. But on SMP this opens a tiny race where another CPU could
grab the pended thread and switch to it while we were still executing
on its stack!
Fix this by elevating the "lock swap" code that already exists in the
(portable/switch-based) z_swap() code one level so that it happens in
z_pend_curr() also. Now we hold the scheduler lock between pend and
the final context switch.
Note that this technique can't work for the older z_swap_irqlock()
implementation, which exists to vestigially support a few bits of arch
code (mostly direct interrupts) that don't work on SMP anyway.
Address with an assert to prevent future misuse.
[1] z_swap() is a historical API implemented in per-arch assembly for
older architectures (like ARM32!). It was designed to be called
with what at the time was a global IRQ lock, so it doesn't
understand the idea of a separate scheduler lock. When we finally
get all architectures on arch_switch() this design can be cleaned up
quite a bit.
Signed-off-by: Andy Ross <andyross@google.com>
We have cases where some devices need to be initialized very early and
before c_start is called, i.e. to set up a very early console or to set up
memory. Traditionally this would be hardcoded as part of the soc layer
and not use the device model or the init levels.
This patch adds a new level ARCH, which will be called in early
architecture code and before we jump to the kernel code.
Signed-off-by: Anas Nashif <anas.nashif@intel.com>
k_heap_aligned_alloc was not handling the K_FOREVER timeout
correctly due to an unsigned return value. Added explicit
K_FOREVER handling of the end time.
Fixes #50611.
Signed-off-by: Jay Shoen <jay.shoen@perceive.io>
The interrupt stack is used as the system stack during kernel
initialization while IRQs are not yet enabled. The sp register is
set to z_interrupt_stacks + CONFIG_ISR_STACK_SIZE.
CONFIG_ISR_STACK_SIZE only represents the desired usable stack size.
This does not take into account the added guard area. Result is a stack
whose pointer is much closer to the trigger zone than expected when
CONFIG_PMP_STACK_GUARD=y, and the SMP configuration in particular pushes
it over the edge during many CI test cases.
Worse: during early init we're not quite ready to handle exceptions
yet and complete havoc ensues with no meaningful debugging output.
Make sure the early assembly code locates the actual top of the stack
by generating a constant with its true size.
Signed-off-by: Nicolas Pitre <npitre@baylibre.com>
Obtaining the CPU outside of the spin locks on SMP would
result in the assert __ASSERT(!z_smp_mobile()) failing,
which makes sense as the current CPU may change.
Signed-off-by: Tom Burdick <thomas.burdick@intel.com>
The requirement for k_yield() to handle "yielding" in the idle thread
was removed a while back, but it missed a spot where we'd try to yield
in the fallback loop on bringup platforms that lack an IPI. This now
crashes, because yield now unconditionally tries to reschedule the
current thread, which doesn't work for idle threads that don't live in
the run queue.
Just make it a busy loop calling swap(), even simpler.
Fixes #50119
Signed-off-by: Andy Ross <andyross@google.com>
When building with CONFIG_SCHED_CPU_MASK_PIN_ONLY=y, CPU mask
is fixed and cannot be changed while thread is running.
The current code asserts if thread state is anything but PREPARED.
We do however have interface like k_work_queue_start() where a thread is
started as part of the queue start. To allow user to set the pinned CPU
for the work queue thread, it needs to be possible to suspend the
thread, set the mask, and then call k_thread_resume(). This seems to be
a valid sequence, so relax the assert check to reflect this.
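The now-permitted sequence looks roughly like this (stack size, priority and target CPU are placeholders):
```c
#define MY_PRIO 5

K_THREAD_STACK_DEFINE(my_stack, 1024);
static struct k_work_q my_queue;

void start_pinned_queue(void)
{
	k_work_queue_start(&my_queue, my_stack,
			   K_THREAD_STACK_SIZEOF(my_stack), MY_PRIO, NULL);

	k_thread_suspend(&my_queue.thread);	/* not running: mask may change */
	k_thread_cpu_mask_clear(&my_queue.thread);
	k_thread_cpu_mask_enable(&my_queue.thread, 1);	/* pin to CPU 1 */
	k_thread_resume(&my_queue.thread);
}
```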
Signed-off-by: Kai Vehmanen <kai.vehmanen@linux.intel.com>
k_poll does not currently allow polling on pipes. This adds support
for doing so on buffered pipes.
Signed-off-by: Jeremy Herbert <jeremy.006@gmail.com>
When a cache API function is called from userspace, this results on
ARM64 in an OOPS (bad syscall error). This is due to at least two
different factors:
- the location of the cache handlers is preventing the linker from
actually finding the handlers
- specifically for ARM64 and ARC some cache handling functions are not
implemented (when userspace is not used the compiler simply optimizes
out these calls)
Fix the problem by:
- moving the userspace cache handlers to their logical and proper
location (in the drivers directory)
- adding the missing handlers for ARM64 and ARC
Signed-off-by: Carlo Caione <ccaione@baylibre.com>
Many device pointers are initialized at compile time and never changed. This
means that the device pointer can be constified (immutable).
Automated using:
```
perl -i -pe 's/const struct device \*(?!const)(.*)= DEVICE/const struct
device *const $1= DEVICE/g' **/*.c
```
Signed-off-by: Gerard Marull-Paretas <gerard.marull@nordicsemi.no>
There's no point to doing this when the host OS clears all memory at
mapping time. And as it turns out, the __bss_end symbol it was
relying on actually comes from the host toolchain's linker, not our
own linker scripts (making it semi-dangerous to rely on). And it's
not present in clang/lld output anyway.
Signed-off-by: Andy Ross <andyross@google.com>
This new implementation of pipes has a number of advantages over the
previous one.
1. The schedule locking is eliminated both making it safer for SMP
and allowing for pipes to be used from ISR context.
2. The code used to be structured to have separate code for copying
to/from a waiting thread's buffer and the pipe buffer. This had
unnecessary duplication that has been replaced with a simpler
scatter-gather copy model.
3. The manner in which the "working list" is generated has also been
simplified. It no longer tries to use the thread's queuing node.
Instead, the k_pipe_desc structure (whose instances are part
part of the k_thread structure) has been extended to contain
additional fields including a node for use with a linked list. As
this impacts the k_thread structure, pipes are now configurable
in the kernel via CONFIG_PIPES.
Fixes #47061
Signed-off-by: Peter Mitsis <peter.mitsis@intel.com>
Say threadA holds a mutex and threadB tries
to lock it with a timeout; a race would occur
if threadA unlocks that mutex after threadB
gets unpended by sys_clock and before it gets
scheduled and calls k_spin_lock.
This patch fixes this issue by checking the
mutex's status again after the k_spin_lock call.
Fixes #48056
Signed-off-by: Qi Yang <qi.yang@cmind-semi.com>
Fixes #46324
Set dummy_thread->base.slice_ticks to 0 when
CONFIG_TIMESLICE_PER_THREAD is set, to avoid
_current_cpu->slice_ticks being a big number.
Signed-off-by: Hu Zhenyu <zhenyu.hu@intel.com>
Fixes an issue in sys_clock_tick_get() that could lead to drift in
a k_timer handler. The handler is invoked in the timer ISR as a
callback in sys_tick_announce().
1. The handler invokes k_uptime_ticks().
2. k_uptime_ticks() invokes sys_clock_tick_get().
3. sys_clock_tick_get() must call elapsed() and not
sys_clock_elapsed() as we do not want to count any
unannounced ticks that may have elapsed while
processing the timer ISR.
Fixes #46378
Signed-off-by: Peter Mitsis <peter.mitsis@intel.com>
Updates sys_clock_announce() such that the <announce_remaining> update
calculation is done after the callback. This prevents another core from
entering the timeout processing loop before the first core leaves it.
Signed-off-by: Peter Mitsis <peter.mitsis@intel.com>
There is no easy way to clear event bits without
the potential for a race to exist between producer(s)
and consumer(s). The result of this race is that events
can be lost through the various resetting mechanisms
available (flag to k_event_wait(), or k_event_set()).
Add k_event_set_masked() which permits bits to be set or cleared.
This allows consumers to clear just the bits that they have read
without (accidentally) discarding any new bits.
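A consumer-side sketch (the event bit names are made up):
```c
#define EVT_RX BIT(0)
#define EVT_TX BIT(1)

void consumer(struct k_event *evt)
{
	/* Wait without resetting, then clear exactly the bits we consumed;
	 * anything posted in the meantime is preserved.
	 */
	uint32_t got = k_event_wait(evt, EVT_RX | EVT_TX, false, K_FOREVER);

	/* ... handle the events reported in `got` ... */

	k_event_set_masked(evt, 0, got);
}
```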
Update unit tests to verify the functionality.
Partly Fixes #46117.
Signed-off-by: Andrew Jackson <andrew.jackson@amd.com>
Although there is nothing wrong with the existing code,
it doesn't permit individual bits to be set (or cleared).
This makes further changes slightly awkward.
Use a mask to restrict the bits set in an event.
Signed-off-by: Andrew Jackson <andrew.jackson@amd.com>
MISRA C:2012 Rule 14.4 (The controlling expression of an if statement
and the controlling expression of an iteration-statement shall have
essentially Boolean type.)
Use `bool' instead of `int' to represent Boolean values.
Use `do { ... } while (false)' instead of `do { ... } while (0)'.
Use comparisons with zero instead of implicitly testing integers.
This commit is a subset of the original commit:
5d02614e34a86b549c7707d3d9f0984bc3a5f22a
Signed-off-by: Simon Hein <SHein@baumer.com>
irq_lock() returns an unsigned integer key.
Generated by spatch using semantic patch
scripts/coccinelle/irq_lock.cocci
Signed-off-by: Johann Fischer <johann.fischer@nordicsemi.no>
Adds memory usage runtime stats routines that parallel those used
by both the heap and mem_blocks. This helps maintain some level of
consistency across the different memory types.
Signed-off-by: Peter Mitsis <peter.mitsis@intel.com>
Move scripts needed by the build system and not designed to be run
individually or standalone into the build subfolder.
Signed-off-by: Anas Nashif <anas.nashif@intel.com>
Update the two locations that use two `SYS_INIT` macros with the same
initialisation functions to use `SYS_INIT_NAMED`.
Signed-off-by: Jordan Yates <jordan.yates@data61.csiro.au>
Add a minimal EFI console driver to support printf; this console driver
only supports console output. Without it, printf will not work.
Signed-off-by: Enjia Mai <enjia.mai@intel.com>
Adds compatibility with Intel ADSP GDB from Zephyr SDK and
from Cadence toolchain to coredump_gdbserver.py.
Adds CAVS 15-25 (APL) register definitions. Implements
handle_register_single_read_packet to serve ADSP GDB
p packets.
Prevents BSA from changing between stack dump printout
and coredump by taking lock. Observed to be necessary for
accurate results on slower simulated platforms.
Signed-off-by: Lauren Murphy <lauren.murphy@intel.com>
Logging v1 has been removed and log_strdup wrapper function is no
longer needed. Removing the function and its use in the tree.
Signed-off-by: Krzysztof Chruscinski <krzysztof.chruscinski@nordicsemi.no>
This commit updates all deprecated `K_KERNEL_PINNED_STACK_ARRAY_EXTERN`
macro usages to use the `K_KERNEL_PINNED_STACK_ARRAY_DECLARE` macro
instead.
Signed-off-by: Stephanos Ioannidis <root@stephanos.io>
Files including <zephyr/kernel.h> do not have to include
<zephyr/zephyr.h>, a shim to <zephyr/kernel.h>.
Signed-off-by: Gerard Marull-Paretas <gerard.marull@nordicsemi.no>
Rename the symbols used to denote the locations of the global
constructor lists and modify the Zephyr start-up code accordingly.
On POSIX systems this ensures that the native libc init code won't
find any constructors to run before Zephyr loads.
Fixes #39347, #36858
Signed-off-by: David Palchak <palchak@google.com>
Use a new environment variable,
ZEPHYR_TOOLCHAIN_SUPPORTS_THREAD_LOCAL_STORAGE, to set the value for
TOOLCHAIN_SUPPORTS_THREAD_LOCAL_STORAGE instead of setting it to 'n' for
all non-Zephyr toolchains. In particular, the Debian arm-none-eabi
toolchain has TLS support and with this option, can be used to build
Zephyr with thread local variables.
Signed-off-by: Keith Packard <keithp@keithp.com>
Documentation specifies that aborting/terminating/exiting essential
threads is a system panic condition, but we didn't actually implement
that and allowed it as for other threads. At least one app wants to
exploit this documented behavior as a "watchdog" kind of condition,
and that seems reasonable. Do what we say we're supposed to do.
This also includes a small fix to a test, which seemed like it was
written to exercise exactly this condition. Except that it failed to
detect whether or not a system fatal error was actually signaled and
was (incorrectly) indicating "success". Check that we actually enter
the handler.
Fixes #45545
Signed-off-by: Andy Ross <andrew.j.ross@intel.com>
The function k_thread_runtime_stats_all_get() now populates the
current_cycles field in the thread runtime stats structure.
Resets the number of cycles in the CPU's current usage window once
the idle thread is scheduled.
Fixes the average_cycles calculation.
Signed-off-by: Peter Mitsis <peter.mitsis@intel.com>
For a library which already provides a multi-thread aware errno, use
that instead of creating our own internal value.
Signed-off-by: Keith Packard <keithp@keithp.com>
This adds the internal function z_work_submit_to_queue(), which
submits the work item to the queue but doesn't force the thread to yield,
compared to the public function k_work_submit_to_queue().
When called from poll.c in the context of k_work_poll events, it ensures
that the thread does not yield in the context of the spinlock of the object
that became available.
Fixes #45267
Signed-off-by: Lucas Dietrich <ld.adecy@gmail.com>
In order to bring consistency in-tree, migrate all kernel code to the
new prefix <zephyr/...>. Note that the conversion has been scripted,
refer to zephyrproject-rtos#45388 for more details.
Signed-off-by: Gerard Marull-Paretas <gerard.marull@nordicsemi.no>
Implements a function that application and driver code can use to check
whether it is valid to yield (or block) in the current context. This
check is required for functions that can feasibly be run from multiple
contexts. The primary intended use case is power management transition
functions, which can be run by application code explicitly or
automatically in the idle thread by system PM.
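A usage sketch, assuming the helper is named k_can_yield() and using a made-up settle delay:
```c
#define RAIL_SETTLE_US 50U

static void wait_for_rail(void)
{
	if (k_can_yield()) {
		k_sleep(K_USEC(RAIL_SETTLE_US));	/* thread context: block */
	} else {
		k_busy_wait(RAIL_SETTLE_US);	/* ISR or idle thread: spin */
	}
}
```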
Signed-off-by: Jordan Yates <jordan.yates@data61.csiro.au>
This adds lazy floating point context switching. On svc/irq entrance,
the VFP is disabled and a pointer to the exception stack frame is saved
away. If the esf pointer is still valid on exception exit, then no
other context used the VFP so the context is still valid and nothing
needs to be restored. If the esf pointer is NULL on exception exit,
then some other context used the VFP and the floating point context is
restored from the esf.
The undefined instruction handler is responsible for saving away the
floating point context if needed. If the handler is in the first
irq/svc context and the current thread uses the VFP, then the float
context needs to be saved. Also, if the handler is in a nested context
and the previous context was using the VFP, save the float context.
Signed-off-by: Bradley Bolen <bbolen@lexmark.com>
Do not allow changing the CPU to which a thread is pinned when it is
already being executed. This allows further optimizations on some
platforms with incoherent memory, since we can safely assume that the
thread will run on the same CPU and avoid invalidating / flushing the
cache during context switches.
Signed-off-by: Flavio Ceolin <flavio.ceolin@intel.com>
The k_timer utility was written to assume that the kernel timeout
handler would never be delayed by more than a tick, so it can naively
reschedule the next interrupt with a simple delay.
Unfortunately real platforms have glitchy hardware and high tick
rates, and on intel_adsp we're seeing this promise being broken in
some circumstances.
It's probably not a good idea to try to plumb the timer driver
interface up into the IPC layer to do this correction, but thankfully
the existing absolute timeout API provides the tools we need (though
it does require that CONFIG_TIMEOUT_64BIT be enabled).
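A sketch of the idea using the public API (K_TIMEOUT_ABS_TICKS needs
CONFIG_TIMEOUT_64BIT; the timer and tick bookkeeping here are
hypothetical):
static struct k_timer my_timer;
static uint64_t next_expiry;
/* reschedule against the originally expected expiry rather than
 * "now", so a late-running handler doesn't accumulate drift */
next_expiry += period_ticks;
k_timer_start(&my_timer, K_TIMEOUT_ABS_TICKS(next_expiry), K_NO_WAIT);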
Signed-off-by: Andy Ross <andrew.j.ross@intel.com>
The original design intent with arch_sched_ipi() was that
interprocessor interrupts were fast and easily sent, so to reduce
latency the scheduler should notify other CPUs synchronously when
scheduler state changes.
This tends to result in "storms" of IPIs in some use cases, though.
For example, SOF will enumerate over all cores doing a k_sem_give() to
notify a worker thread pinned to each, each call causing a separate
IPI. Add to that the fact that unlike x86's IO-APIC, the intel_adsp
architecture has targeted/non-broadcast IPIs that need to be repeated
for each core, and suddenly we have an O(N^2) scaling problem in the
number of CPUs.
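A sketch of the pattern that triggers the storm (names hypothetical):
/* previously each give sent its own IPI; with targeted IPIs that is
 * one interrupt per destination core, per iteration */
for (int i = 0; i < CONFIG_MP_MAX_NUM_CPUS; i++) {
    k_sem_give(&worker_sem[i]);
}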
Instead, batch the "pending" IPIs and send them only at known
scheduling points (end-of-interrupt and swap). This semantically
matches the locations where application code will "expect" to see
other threads run, so arguably is a better choice anyway.
Signed-off-by: Andy Ross <andrew.j.ross@intel.com>
The work queue has a semi/non-standard reschedule point implemented
using k_yield(), with a check to see if the current thread is
preemptible. Just call z_reschedule_unlocked(); it has this check
internally and is the intended API for this.
Really, this is only a half fix. Ideally the schedule point and the
lock release should be atomic[1] via the more idiomatic
z_reschedule(). But that would take some surgery, so let's go with
the simpler cleanup first.
This also avoids having to duplicate logic that gets added to
reschedule points by an upcoming patch.
[1] So that they represent a condition variable and don't race at the
end. In this case the race is present but benign, since the only thing
we really want to know is that the queue thread gets a chance to run.
The only cost is an occasional duplicated/needless context switch if
two threads are racing on a submit.
Signed-off-by: Andy Ross <andrew.j.ross@intel.com>
Removes an unnecessary schedule lock/unlock pair from k_mutex_unlock().
Rationale: Given that only the current thread (which would also be the
mutex owner) will be able to modify the mutex object AND that a
recursive unlock ought never trigger any reschedule (as it does not
touch the pend queue), performing a schedule lock is not needed
prior to testing for a recursive unlock.
Furthermore, even if it is not a recursive unlock, then a schedule lock
is superfluous as the existing spinlock provides sufficient protection.
Signed-off-by: Peter Mitsis <peter.mitsis@intel.com>
When threads are in more than one state at a time, k_thread_state_str()
returns a string that lists each of its states delimited by a '+'.
This in turn necessitates a change to the API that includes both a
pointer to the buffer to use for the string and the size of the buffer.
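A hedged sketch of the updated call (buffer size is arbitrary):
char buf[32];
k_thread_state_str(tid, buf, sizeof(buf));
/* a thread that is, e.g., both queued and suspended yields a
 * string like "queued+suspended" */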
Signed-off-by: Peter Mitsis <peter.mitsis@intel.com>
Add an API that clears cpu mask from a thread and sets it to a specific
CPU.
This is the equivalent of:
k_thread_cpu_mask_clear(&thread);
k_thread_cpu_mask_enable(&thread, cpu_idx);
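Assuming the new API is named k_thread_cpu_pin(), the pair above
collapses to a single call:
k_thread_cpu_pin(&thread, cpu_idx);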
Signed-off-by: Anas Nashif <anas.nashif@intel.com>
Instead of resizing the handles of all devices, we only resize those of
devices that are power domains. This means that a power domain has to
be declared as compatible with "power-domain" in its devicetree node.
Signed-off-by: Flavio Ceolin <flavio.ceolin@intel.com>
Add an API to add devices to a power domain at runtime. The number of
devices that can be added is defined at build time.
The script gen_handles.py will check the number defined in
`CONFIG_PM_DEVICE_POWER_DOMAIN_DYNAMIC` to resize the handles vector,
adding empty slots in the supported section to be used later.
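A hedged usage sketch (assuming the runtime call is
pm_device_power_domain_add() and that both devicetree nodes exist):
const struct device *dev = DEVICE_DT_GET(DT_NODELABEL(my_sensor));
const struct device *domain = DEVICE_DT_GET(DT_NODELABEL(pd_core));
int ret = pm_device_power_domain_add(dev, domain);
/* expected to fail with a negative errno once the slots reserved by
 * CONFIG_PM_DEVICE_POWER_DOMAIN_DYNAMIC are exhausted */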
Signed-off-by: Flavio Ceolin <flavio.ceolin@intel.com>
Commit b1182bf83b ("kernel/timeout: Serialize handler callbacks on
SMP") introduced an important fix to timeout handling on
multiprocessor systems, but it did it in a clumsy way by holding a
spinlock across the entire timeout process on all cores (everything
would have to spin until one core finished the list). The lock also
delays any nested interrupts that might otherwise be delivered, which
breaks our nested_irq_offload case on xtensa+SMP (where, unlike x86,
the "synchronous" interrupt is sensitive to mask state).
Doing this right turns out not to be so hard: take the timeout lock,
check to see if someone is already iterating
(i.e. "announce_remaining" is non-zero), and if so just increment the
ticks to announce and exit. The original cpu will then complete the
full timeout list without blocking any others longer than needed to
check the timeout state.
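A sketch of the shape of that logic (internal names abbreviated):
static void announce_sketch(int32_t ticks)
{
    k_spinlock_key_t key = k_spin_lock(&timeout_lock);
    if (announce_remaining != 0) {
        /* another cpu is already walking the timeout list;
         * fold our ticks into its count and return */
        announce_remaining += ticks;
        k_spin_unlock(&timeout_lock, key);
        return;
    }
    /* ... otherwise take ownership and process the full list ... */
    k_spin_unlock(&timeout_lock, key);
}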
Fixes #44758
Signed-off-by: Andy Ross <andrew.j.ross@intel.com>
On multiprocessor systems, it's routine to enter sys_clock_announce()
in parallel (the driver will generally announce zero ticks on all but
one cpu).
When that happens, each call will independently enter the loop over
the timeout list. The access is correctly synchronized, so the list
handling is correct. But the lock is RELEASED around the invocation
of the callback, which means that the individual callbacks may
interleave between cpus. That means that individual
application-provided callbacks may be executed in parallel, which to
the app is indistinguishable from "out of order".
That's surprising and error-prone. Don't do it. Place a secondary
outer spinlock around the announce loop (but not the timeslicing
handling) to correctly serialize the timeout handling on a single cpu.
(It should be noted that this was discovered not because of a timeout
callback race, but because the resulting simultaneous calls to
sys_clock_set_timeout from separate cores seems to cause extremely
high latency excursions on intel_adsp hardware using the cavs_timer
driver. That hardware issue is still poorly understood, but this fix
is desirable regardless.)
Signed-off-by: Andy Ross <andrew.j.ross@intel.com>
The idle thread got an index suffix in #23536 to make it easier to
identify different idle threads on different cores. This looks out of
place on single-core devices when the idle thread is listed next to
other kernel threads, such as main.
Remove the idle thread index on single-core platforms, and replace all
references to this format in tests and documentation.
Signed-off-by: Trond Einar Snekvik <Trond.Einar.Snekvik@nordicsemi.no>
This is an attempt at formally distinguishing and supporting the case
described in #40795 where an architecture doesn't preserve/restore the
complete thread state upon entering/exiting interrupt exception state.
This is mainly about promoting the current behavior from the accepted
workaround to a formal API specification. This workaround is currently
used on ARM64 but RISC-V requires it too.
Signed-off-by: Nicolas Pitre <npitre@baylibre.com>
A reference to the entropy device can be obtained at compile time, so
avoid using device_get_binding().
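A sketch of the compile-time lookup (assumes a zephyr,entropy chosen
node in the devicetree):
const struct device *entropy_dev =
    DEVICE_DT_GET(DT_CHOSEN(zephyr_entropy));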
Signed-off-by: Gerard Marull-Paretas <gerard.marull@nordicsemi.no>
There is an API to get a specific number of random bytes. There is
no need to re-implement this logic here.
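A hedged sketch using the random subsystem's bulk API (assumed here to
be sys_rand_get()):
uint8_t buf[16];
sys_rand_get(buf, sizeof(buf)); /* fills buf with random bytes */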
Signed-off-by: Flavio Ceolin <flavio.ceolin@intel.com>