Fix the preference allocation logic. If the pool is preferred but POOL_SIZE
is 0 or pool allocation fails, fall back to heap allocation if heap
allocation is enabled.
Signed-off-by: Flavio Ceolin <flavio.ceolin@intel.com>
Add support for dynamic thread stack objects. A new container
for this kernel object was added to avoid imposing its alignment
constraint on all dynamic objects.
Signed-off-by: Flavio Ceolin <flavio.ceolin@intel.com>
Add a new API to dynamically allocate kernel objects that allows
passing an arbitrary size. This new API makes it possible to allocate
dynamic thread stacks.
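A minimal sketch of how such an allocation might look; the allocator name
k_object_alloc_size() and the stack object type K_OBJ_THREAD_STACK_ELEMENT
are assumptions here:
```c
#include <zephyr/kernel.h>

/* Hedged sketch: k_object_alloc_size() and K_OBJ_THREAD_STACK_ELEMENT are
 * assumed names for the size-aware allocator and the stack object type. */
static k_thread_stack_t *alloc_dynamic_stack(size_t size)
{
	k_thread_stack_t *stack =
		k_object_alloc_size(K_OBJ_THREAD_STACK_ELEMENT, size);

	if (stack == NULL) {
		/* allocation failed (pool/heap exhausted) */
		return NULL;
	}
	return stack;
}
```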
Signed-off-by: Flavio Ceolin <flavio.ceolin@intel.com>
While the LOCKED pattern is universally useful, it can be misused. This
change therefore exposes the LOCKED pattern with extensive usage
documentation to reduce the risk of abuse or unintended deadlock.
Signed-off-by: Florian Grandel <fgrandel@code-for-humans.de>
Update the return value of functions that modify the internal event
state from `void` to `uint32_t`, so that calling code can determine
whether the event was already in a given state, or if the call modified
it.
This simplifies the usage of `struct k_event` as an alternative to
`atomic_t` that users can block on.
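A short sketch of how the new return value might be used (event object and
bit names are hypothetical):
```c
#include <zephyr/kernel.h>

K_EVENT_DEFINE(my_event);

void producer(void)
{
	/* k_event_post() now returns the event state prior to the call. */
	uint32_t prev = k_event_post(&my_event, BIT(0));

	if ((prev & BIT(0)) == 0U) {
		/* This call is the one that transitioned the bit from 0 to 1. */
	}
}
```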
Implements #57216
Signed-off-by: Jordan Yates <jordan.yates@data61.csiro.au>
Scheduling relative timeouts from within timer callbacks (=sys clock ISR
context) differs from scheduling relative timeouts from an application
context.
This change documents and explains the rationale of this distinction.
Signed-off-by: Florian Grandel <fgrandel@code-for-humans.de>
Device dependencies are not always required, so make them optional via
CONFIG_DEVICE_DEPS. When enabled, the gen_device_deps script will run so
that dependencies are collected and part of the final image. Related
APIs will also be made available. Since device dependencies are used in
just a few places (power domains), disable the feature by default. When
not enabled, a second linking pass will not be required.
Signed-off-by: Gerard Marull-Paretas <gerard.marull@nordicsemi.no>
The option can now be set by projects. This change will also allow
making it dependent on a future CONFIG_DEVICE_DEPS option.
Signed-off-by: Gerard Marull-Paretas <gerard.marull@nordicsemi.no>
Rename the Kconfig option to be in line with recent renamings in device
handles/dependencies.
Signed-off-by: Gerard Marull-Paretas <gerard.marull@nordicsemi.no>
Rename struct device `handles` member to `deps`, in line with previous
renamings in the device API.
Signed-off-by: Gerard Marull-Paretas <gerard.marull@nordicsemi.no>
This adds a few lines that use zephyr_syscall_header() to include
headers containing syscall function prototypes.
Signed-off-by: Daniel Leung <daniel.leung@intel.com>
Only set a CPU as active (in the PM subsystem) when the CPU is effectively
initialized. We cannot assume in the PM subsystem that all CPUs were
initialized, since when the option CONFIG_SMP_BOOT_DELAY is used CPUs are
initialized on demand by the application.
Note that once CPUs are properly initialized the subsystem is able to track
their status.
Signed-off-by: Flavio Ceolin <flavio.ceolin@intel.com>
As discovered by Carlo Caione, the k_thread_join code had a case where
it detected it had been called on a thread already marked _THREAD_DEAD
and exited early. That's not sufficient. The thread state is mutated
from the thread itself on its exit path. It may still be running!
Just like the code in z_swap(), we need to spin waiting on the other
CPU to write the switch handle before knowing it's safe to return,
otherwise the calling context might (and did) do something like
immediately k_thread_create() a new thread in the "dead" thread's
struct while it was still running on the other core.
There was also a similar case in k_thread_abort() which had the same
issue: it needs to spin waiting on the other CPU to kill the thread
via the same mechanism.
Fixes #58116
Originally-by: Carlo Caione <ccaione@baylibre.com>
Signed-off-by: Andy Ross <andyross@google.com>
The switch_handle field in the thread struct is used as an atomic flag
between CPUs in SMP, and has been known for a long time to technically
require memory barriers for correct operation. We have an API for
that now, so put them in:
* The code immediately before arch_switch() needs a write barrier to
ensure that thread state written by the scheduler is seen to happen
before the outgoing thread is flagged with a valid switch handle.
* The loop in z_sched_switch_spin() needs a read barrier at the end,
to make sure the calling context doesn't load state from before the
other CPU stored the switch handle.
Also, that same spot in switch_spin was spinning with interrupts held,
which means it needs a call to arch_spin_relax() to avoid a FPU state
deadlock on some architectures.
Signed-off-by: Andy Ross <andyross@google.com>
This trick turns out also to be needed by the abort/join code.
Promote it to a more formal-looking internal API and clean up the
documentation to (hopefully) clarify the exact behavior and better
explain the need.
This is one of the more... enchanted bits of the scheduler, and while
the trick is IMHO pretty clean, it remains a big SMP footgun.
Signed-off-by: Andy Ross <andyross@google.com>
Many RTOS applications assume a 1:1 mapping between virtual and physical
addresses, so add 1:1 mapping support in z_phys_map() to make it easier
to adapt these applications.
Signed-off-by: Hou Zhiqiang <Zhiqiang.Hou@nxp.com>
Give architectures that need it the ability to perform special checks
while e.g. waiting for a spinlock to become available.
Signed-off-by: Nicolas Pitre <npitre@baylibre.com>
Introduce a new API for barrier operations starting with a general
skeleton and the implementation for barrier_data_memory_fence_full().
Select a built-in or an arch-based implementation according to new
Kconfig symbols CONFIG_BARRIER_OPERATIONS_BUILTIN and
CONFIG_BARRIER_OPERATIONS_ARCH.
The built-in implementation falls back on the compiler built-in
function using __ATOMIC_SEQ_CST as it is done for the atomic APIs
already.
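A minimal sketch of a full data memory fence used as a publish barrier,
using the function named in this commit; the header path
<zephyr/sys/barrier.h> is an assumption:
```c
#include <zephyr/sys/barrier.h>

static int shared_data;
static volatile int shared_flag;

void publish(int value)
{
	shared_data = value;
	/* Ensure the data store is visible before the flag store. */
	barrier_data_memory_fence_full();
	shared_flag = 1;
}
```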
Signed-off-by: Carlo Caione <ccaione@baylibre.com>
z_page_frame can't be packed on Xtensa due to memory alignment
constraints. When this struct is packed, it is 5 bytes long, which
causes a memory alignment problem on Xtensa.
Signed-off-by: Flavio Ceolin <flavio.ceolin@intel.com>
Until now, iterable sections APIs have been part of the toolchain
(common) headers. They are not strictly related to a toolchain; they
just rely on the linker providing support for sections. Most files relied
on indirect includes to access the API; now it is included as needed.
Signed-off-by: Gerard Marull-Paretas <gerard.marull@nordicsemi.no>
When a running thread gets aborted asynchronously (this only happens
in SMP contexts, obviously) it gets flagged "aborting", but the actual
abort needs to happen in the thread's own context. For convenience,
this was done in the next_up() routine that selects the next thread to
run at interrupt exit time.
But this check was being done AFTER the next candidate thread was
selected from the run queue. Thread abort can wake up threads blocked
in k_thread_join(), and therefore these weren't seen as runnable
threads, even if they should have been.
Executive summary: if you killed a thread running on another CPU, and
there was another thread joined to the killed thread that should have
run on that CPU, it wouldn't (until it received an interrupt or
otherwise reached a schedule point).
Move the abort check above the run queue inspection and into the
end-of-interrupt processing in z_get_next_switch_handle() (so it's
actually a mild performance boost as it's no longer part of the
cooperative context switch path). Simple fix, subtle bug.
Fixes #58040
Signed-off-by: Andy Ross <andyross@google.com>
The exception handler (arch/x86/core/ia32/excstub.S) may access the
_kernel variable, which leads to a failure when paging is enabled,
so pin this critical variable.
Signed-off-by: Qipeng Zha <qipeng.zha@intel.com>
The ACE 2.0 LNL platform has 5 HIFI4 cores. Change number
of cores to enable 5th core on the platform.
Signed-off-by: Jaroslaw Stelter <Jaroslaw.Stelter@intel.com>
Without these parentheses, specifying a q_max_msgs of e.g.
`MY_DEFAULT_QUEUESIZE+1` would result in a buffer of size
(1 element + MY_DEFAULT_QUEUESIZE bytes).
This would then lead to an unbounded buffer overflow because the queue
never reaches the exact (offset by MY_DEFAULT_QUEUESIZE bytes)
`buffer_end` and just keeps writing.
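An illustrative sketch of the operator-precedence problem (not the actual
kernel macro), showing why the parentheses matter:
```c
/* Hypothetical simplification of the buffer-size computation. */
#define BUF_SIZE_BAD(max_msgs, msg_size)  (max_msgs * msg_size)
#define BUF_SIZE_GOOD(max_msgs, msg_size) ((max_msgs) * (msg_size))

#define MY_DEFAULT_QUEUESIZE 16

/* With a 4-byte message size:
 * BAD:  MY_DEFAULT_QUEUESIZE + 1 * 4 == 20 (1 element + 16 bytes)
 * GOOD: (MY_DEFAULT_QUEUESIZE + 1) * 4 == 68 (17 elements)
 */
static char bad_buf[BUF_SIZE_BAD(MY_DEFAULT_QUEUESIZE + 1, 4)];
static char good_buf[BUF_SIZE_GOOD(MY_DEFAULT_QUEUESIZE + 1, 4)];
```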
Additionally, add asserts to make sure this can't happen again.
Signed-off-by: Armin Brauns <armin.brauns@embedded-solutions.at>
Use iterable sections to handle devices list. This simplifies devices
implementation by using standard APIs.
Signed-off-by: Gerard Marull-Paretas <gerard.marull@nordicsemi.no>
When building sample.minimal.mt-no-preempt-no-timers.arm on arm-clang
we get a link error as z_pm_save_idle_exit expects sys_clock_idle_exit
to be defined.
However the sample sets CONFIG_SYS_CLOCK_EXISTS=n so
sys_clock_idle_exit() will not be defined by any driver. So add proper
ifdef protection in z_pm_save_idle_exit to fix this.
Signed-off-by: Kumar Gala <kumar.gala@intel.com>
When a semaphore is given and there is no thread waiting
for it, do not unconditionally perform a reschedule.
Signed-off-by: Peter Mitsis <peter.mitsis@intel.com>
Some devices do not need to perform any initialization, so allow the
init function to be NULL. In this case, the initialization code will
just mark the device as initialized, i.e. ready.
Signed-off-by: Gerard Marull-Paretas <gerard.marull@nordicsemi.no>
Removes unused absolute symbols that are defined via the
GEN_ABSOLUTE_SYM() macro in the kernel directory.
Signed-off-by: Peter Mitsis <peter.mitsis@intel.com>
As both C and C++ standards require applications running under an OS to
return 'int', adapt that for Zephyr to align with those standards. This also
eliminates errors when building with clang when not using -ffreestanding,
and reduces the need for compiler flags to silence warnings for both clang
and gcc.
Most of these changes were automated using coccinelle with the following
script:
@@
@@
- void
+ int
main(...) {
...
- return;
+ return 0;
...
}
Approximately 40 files had to be edited by hand as coccinelle was unable to
fix them.
Signed-off-by: Keith Packard <keithp@keithp.com>
As both C and C++ standards require applications running under an OS to
return 'int', adapt that for Zephyr to align with those standards. This also
eliminates errors when building with clang when not using -ffreestanding,
and reduces the need for compiler flags to silence warnings for both clang
and gcc.
Signed-off-by: Keith Packard <keithp@keithp.com>
Many areas of Zephyr divide and round up without using the DIV_ROUND_UP
macro. Make use of it, so that we rely on a tested system macro and at
the same time make the code more readable.
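For example, a sketch of the kind of change involved (block size and names
are hypothetical):
```c
#include <stddef.h>
#include <zephyr/sys/util.h>

#define BLOCK_SIZE 64

size_t blocks_needed(size_t len)
{
	/* Before: open-coded rounding-up division. */
	size_t open_coded = (len + BLOCK_SIZE - 1) / BLOCK_SIZE;
	/* After: same result, clearer intent. */
	size_t with_macro = DIV_ROUND_UP(len, BLOCK_SIZE);

	(void)open_coded;
	return with_macro;
}
```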
Signed-off-by: Gerard Marull-Paretas <gerard.marull@nordicsemi.no>
The init infrastructure, found in `init.h`, is currently used by:
- `SYS_INIT`: to call functions before `main`
- `DEVICE_*`: to initialize devices
They are all sorted according to an initialization level + a priority.
`SYS_INIT` calls are really orthogonal to devices, however, the required
function signature requires a `const struct device *dev` as a first
argument. The only reason for that is because the same init machinery is
used by devices, so we have something like:
```c
struct init_entry {
int (*init)(const struct device *dev);
/* only set by DEVICE_*, otherwise NULL */
const struct device *dev;
};
```
As a result, we end up with a weird/ugly pattern like this:
```c
static int my_init(const struct device *dev)
{
/* always NULL! add ARG_UNUSED to avoid compiler warning */
ARG_UNUSED(dev);
...
}
```
This is really a result of poor internals isolation. This patch proposes
to make init entries more flexible so that they can accept system
initialization calls like this:
```c
static int my_init(void)
{
...
}
```
This is achieved using a union:
```c
union init_function {
/* for SYS_INIT, used when init_entry.dev == NULL */
int (*sys)(void);
/* for DEVICE*, used when init_entry.dev != NULL */
int (*dev)(const struct device *dev);
};
struct init_entry {
/* stores init function (either for SYS_INIT or DEVICE*) */
union init_function init_fn;
/* stores device pointer for DEVICE*, NULL for SYS_INIT. Allows
* to know which union entry to call.
*/
const struct device *dev;
};
```
This solution **does not increase ROM usage** and allows offering clean
public APIs for both SYS_INIT and DEVICE*. Note, however, that the init
machinery keeps a coupling with devices.
**NOTE**: This is a breaking change! All `SYS_INIT` functions will need
to be converted to the new signature. See the script offered in the
following commit.
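A minimal sketch of what a SYS_INIT call looks like after this change
(function name is hypothetical):
```c
#include <zephyr/init.h>

static int my_subsys_init(void)
{
	/* one-time setup, no device argument anymore */
	return 0;
}

SYS_INIT(my_subsys_init, POST_KERNEL, CONFIG_KERNEL_INIT_PRIORITY_DEFAULT);
```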
Signed-off-by: Gerard Marull-Paretas <gerard.marull@nordicsemi.no>
init: convert SYS_INIT functions to the new signature
Conversion scripted using scripts/utils/migrate_sys_init.py.
Signed-off-by: Gerard Marull-Paretas <gerard.marull@nordicsemi.no>
manifest: update projects for SYS_INIT changes
Update modules with updated SYS_INIT calls:
- hal_ti
- lvgl
- sof
- TraceRecorderSource
Signed-off-by: Gerard Marull-Paretas <gerard.marull@nordicsemi.no>
tests: devicetree: devices: adjust test
Adjust test according to the recently introduced SYS_INIT
infrastructure.
Signed-off-by: Gerard Marull-Paretas <gerard.marull@nordicsemi.no>
tests: kernel: threads: adjust SYS_INIT call
Adjust to the new signature: int (*init_fn)(void);
Signed-off-by: Gerard Marull-Paretas <gerard.marull@nordicsemi.no>
Add check to ensure that CONFIG_MP_NUM_CPUS and CONFIG_MP_MAX_NUM_CPUS
are set the same. This will at least cause a build issue for out of
tree users.
Signed-off-by: Kumar Gala <kumar.gala@intel.com>
All we really want here is to set default parameters. However
k_sched_time_slice_set() also calls z_reset_time_slice(_current)
which expects `_current` to be fully initialized.
Simply initialize `slice_ticks` and `slice_max_prio` with default values
directly. Unfortunately the compiler isn't smart enough to expand
k_ms_to_ticks_ceil32(CONFIG_TIMESLICE_SIZE) to a constant expression
at build time so we must do the conversion by hand (and it shouldn't
overflow due to the nature of the value).
Signed-off-by: Nicolas Pitre <npitre@baylibre.com>
Slice expirations are now based on the same timeout mechanism as
regular timers which have been recently fixed and proven to work with
single-tick periods.
Signed-off-by: Nicolas Pitre <npitre@baylibre.com>
The reason for arch_num_cpus() is to be able to dynamically adapt to
the actual number of available CPUs at run time.
In the z_sched_init() case, it is not the number of active CPUs that
we need but rather the total number of potential CPUs, and that is
represented by CONFIG_MP_MAX_NUM_CPUS not arch_num_cpus().
Signed-off-by: Nicolas Pitre <npitre@baylibre.com>
Add the `zephyr,pm-device-runtime-auto` flag to `pm.yaml` and
`struct pm_device`.
This flag is intended to signify to the boot system that device runtime
PM should be automatically enabled on the device after the init function
has run.
Only run `pm_device_runtime_auto_enable` function on a device if
initialisation succeeded. This prevents actions being run on devices
that are not ready.
Signed-off-by: Jordan Yates <jordan.yates@data61.csiro.au>
Make sliceable() the actual condition for a sliceable thread. Avoid
creating a slice timeout for non-sliceable threads. Always reset
slice_expired even if the next thread is not sliceable. Fold
slice_expired_locked() into z_time_slice() to avoid the hidden
unlock/lock. Change `curr` to `thread` as this is not necessarily
the current thread (yet) being set. Make variables static.
Signed-off-by: Nicolas Pitre <npitre@baylibre.com>
Updates events to prevent a timeout from corrupting the list of
threads that need to be woken up.
Signed-off-by: Aastha Grover <aastha.grover@intel.com>
Fixes race condition for k_event_post_internal() in an
SMP environment while walking the waitq. Uses z_sched_waitq_walk()
to safely walk the waitq by using a sched_spinlock.
It should be noted that since walking the wait queue is an
operation of indeterminate length, there exists the possibility
that the sched_spinlock (which is a highly used and contended-for
lock) may be locked for an indeterminate amount of time. However,
it is expected that few threads will be waiting on any given kernel
event object, which should ameliorate this risk.
Fixes #54317
Signed-off-by: Aastha Grover <aastha.grover@intel.com>
Moving timeslice events to timeouts isn't quite enough on SMP, as it's
still possible for systems that don't broadcast their timer interrupts
to end up handling an expiration for a foreign CPU. There, we need an
IPI, and a symmetric call to z_time_slice() (which is idempotent and
fast) in the IPI ISR.
Signed-off-by: Andy Ross <andyross@google.com>
Rework the fragile and ad-hoc computation of timeslice expirations
into per-CPU struct _timeout objects with regular callbacks. The
expiration callbacks themselves simply set a per-cpu flag (they might
run on any CPU), which gets checked at the end of the timer ISR on
every CPU.
This simplifies logic and removes a bunch of code. It also fixes at
least three bugs:
1. As @npitre discovered: On SMP, the number of ticks announced on any
given CPU is going to be a subset of all expired ticks. This broke
the accounting of timeslice ticks, and effectively meant that
timeslicing only worked on SMP on systems where one CPU could hog all
the announcements, and only on that CPU.
2. The bootstrap path to arm the timer driver after setting the first
timeout in an empty list couldn't take into account
sys_clock_elapsed() ticks, as it didn't know whether it was being
called underneath an existing announce loop. Now this code is no
longer responsible for knowing anything about time slicing at all.
3. Also on SMP, there was a case where two CPUs timeslicing
simultaneously could stomp on each others' timeouts in
z_set_timeout_expiry(), as neither had a way of knowing what the
other's state was. CPUs could miss their own expiration and have to
wait for the slice expiration on the other CPU. Now, timeouts are
global objects with simple expiration times, and there's no need for
that function at all.
Signed-off-by: Andy Ross <andyross@google.com>
Some of the offset symbols that are derived from the macro
GEN_OFFSET_SYM() are not used anywhere in the Zephyr codebase.
Remove them as part of a cleanup effort.
Instances of an associated GEN_OFFSET_SYM() have also been
removed when the resulting macro is no longer referenced.
Signed-off-by: Peter Mitsis <peter.mitsis@intel.com>
Some of the offset symbols generated via the macro GEN_OFFSET_SYM()
are not used anywhere in the Zephyr codebase. Remove them as part of
a cleanup effort.
Signed-off-by: Peter Mitsis <peter.mitsis@intel.com>
Commit 3e729b2b1c ("kernel/timer: Correctly clamp period argument")
increased the lower limit to 1 so that it wouldn't conflict with a
K_NO_WAIT. But in doing so it enforced a minimum period of 2 ticks.
And the subtraction must obviously be avoided if the period is zero, etc.
Instead of doing this masquerade in k_timer_start(), let's move the
subtraction and clamping in z_timer_expiration_handler() right before
registering a new timeout. It makes the code cleaner, and then it is
possible to have single-tick periods again.
With this, timer_jitter_drift in tests/kernel/timer/timer_behavior does
pass with any CONFIG_SYS_CLOCK_TICKS_PER_SEC value, even when the tick
period is equal to or larger than the specified timer period, a case
which failed the test before.
Signed-off-by: Nicolas Pitre <npitre@baylibre.com>
The call to unschedule_locked() would return true ("successfully
unscheduled") even in the case where the underlying z_abort_timeout()
failed (because the callback was already unpended and
in-progress/complete/about-to-be-run, remember that timeout callbacks
are unsynchronized), leading to state bugs and races against the
callback behavior.
Correctly detect that case and propagate the error to the caller.
Fixes #51872
Signed-off-by: Andy Ross <andyross@google.com>
Fixes sporadic data access violations that were occurring when pipes
were being used from an ISR. The ISR was incorrectly using the pipe
descriptor belonging to the interrupted thread. This led to corrupted
pipe meta-data. The solution proposed here is to perform a run-time
check and use a pipe descriptor on the ISR's stack if called from
an ISR.
For additional information, see:
https://github.com/zephyrproject-rtos/zephyr/issues/52812
Signed-off-by: Peter Mitsis <peter.mitsis@intel.com>
Adds a spin lock/unlock barrier pair after a pipe thread wakes.
After the list of waiting threads is generated, it is possible for
threads on that list to timeout and be removed from the wait queue.
However, since that list was generated before the timeout occurred,
the timed-out thread must wait until the copying is done (the
pipe's spin-lock has been released).
Signed-off-by: Peter Mitsis <peter.mitsis@intel.com>
By the time the working list of readers/writers is processed, it is
possible that the waiting reader/writer being processed has timed out
and is no longer on the wait queue. As such, we cannot blindly
wake the next thread as that next thread might not be the thread we
had just been processing.
To address this, the calls to z_sched_wake() have been replaced
with z_unpend_thread() and z_ready_thread() so that a specific
thread can be safely targeted for waking.
Signed-off-by: Peter Mitsis <peter.mitsis@intel.com>
Uses the new z_sched_waitq_walk() routine to walk the pipe's wait
queue to build a list of waiting threads that will be used for
the data transfer.
This method is preferred over the previous one as it ensures that
the wait queue is safely traversed.
Signed-off-by: Peter Mitsis <peter.mitsis@intel.com>
Adds a routine to safely walk a specified wait queue and invoke a
custom callback function on each waiting thread.
Signed-off-by: Peter Mitsis <peter.mitsis@intel.com>
When a timer is restarted from a high priority interrupt, it may
happen that the timer is re-added to the timeout list right after
it is removed from that list prior to execution of its expiration
handler but before that execution actually occurs. This leads to
an assertion failure reported for `z_add_timeout()` because then
that function, called from `z_timer_expiration_handler()` for
periodic timers, turns out to be adding a timeout that is already
added to the timeout list.
This commit detects such a situation in `z_timer_expiration_handler()`
and makes that function exit immediately when it occurs (as the
timer was restarted, its expiration handler should not be executed).
Signed-off-by: Andrzej Głąbek <andrzej.glabek@nordicsemi.no>
Most of the time, z_cstart() is running with an arbitrary region
of memory as its stack, where the necessary stack setup has not been
performed. This prevents stack protection from working correctly,
as the stack canary has not been populated. So mark z_cstart()
to have no stack protection at all inside the function to avoid
raising an exception during boot.
Signed-off-by: Daniel Leung <daniel.leung@intel.com>
This commit updates all in-tree code to use `CONFIG_CPP` instead of
`CONFIG_CPLUSPLUS`, which is now deprecated.
Signed-off-by: Stephanos Ioannidis <stephanos.ioannidis@nordicsemi.no>
At least one static analysis tool is flagging a potential NULL
dereference in sys_clock_announce()'s tick processing loop where the
routine 'first()' is concerned. In practice, this does not occur as
...
1. The code in question is protected by a spinlock.
2. 'first()' does not change the contents of anything.
The code has consequently been tweaked to prevent similar such false
positives in the future.
Signed-off-by: Peter Mitsis <peter.mitsis@intel.com>
Accurate timekeeping is something that is often taken for granted.
However, reliability of timekeeping code is critical for most core
and subsystem code. Furthermore, many higher-level timekeeping
utilities in Zephyr work off of ticks, but there is no way to modify
ticks directly, so testing would require either unnecessary delays in
test code or non-ideal compromises in test coverage.
Since timekeeping is so critical, there should be as few barriers
to testing timekeeping code as possible, while preserving
integrity of the kernel's public interface.
With this, we expose `sys_clock_tick_set()` as a system call only
when `CONFIG_ZTEST` is set, declared within the ztest framework.
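A hypothetical sketch of how a test might use this, assuming CONFIG_ZTEST=y
and that the syscall declaration comes from the ztest framework headers:
```c
#include <stdint.h>
#include <zephyr/kernel.h>
#include <zephyr/ztest.h>

/* Hypothetical helper: jump the tick counter near a 32-bit rollover so the
 * wrap-around path can be exercised without waiting in real time. */
static void jump_near_32bit_rollover(void)
{
	sys_clock_tick_set((uint64_t)UINT32_MAX - 10U);
	k_sleep(K_TICKS(20));
	/* k_uptime_ticks() now reflects the artificially advanced count. */
}
```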
Signed-off-by: Chris Friedt <cfriedt@meta.com>
The following testcases fail with qemu_cortex_r5 caused by main stack
overflow.
tests/kernel/workq/work_queue/kernel.workqueue
tests/ztest/base/testing.ztest.base.verbose_0_userspace
The main stack size is 512 for qemu_cortex_r5(a Cortex-A/R aarch32
platform) with CONFIG_ZTEST=y. The Cortex-M platforms are already set to
1024. Likely 512 will fail for most aarch32 platforms soon.
Fix the issue by increasing the CONFIG_MAIN_STACK_SIZE to 1024.
Also, remove 'default 1024 if TEST_ARM_CORTEX_M' since Cortex-M is no
longer an exception to the default of 1024.
Signed-off-by: Jaxson Han <jaxson.han@arm.com>
- Logging supports printing 64-bit values now. Cast to unsigned long and
use %lu at all times.
Signed-off-by: Gerard Marull-Paretas <gerard.marull@nordicsemi.no>
In z_phys_unmap(), the call to virt_region_free() is not using the
aligned virtual address and size. This can result in freeing a
smaller region than was allocated, given that inputs to z_phys_unmap()
may not be aligned. So use the already calculated aligned
virtual address and size as input to virt_region_free().
Note that the assertion and if-block in virt_region_free() to
check whether the to-be-unmapped region is within the virtual
memory region needs to be trimmed by one byte at the end.
The assertion and if-block are checking against the region
end address but (start + size) is just one byte over the end.
So subtract one.
Signed-off-by: Daniel Leung <daniel.leung@intel.com>
The C++ standard requires the main() function to have the return type
of 'int' and does not allow the main() to be defined with the 'void'
return type. Moreover, GCC goes as far as to emit a hard error when the
'::main()' has the return type of `void`.
This commit introduces an option to instruct the Zephyr kernel to call
the 'int main(void)' instead of the 'void main(void)' in case a Zephyr
application defines main() in a C++ source file.
Signed-off-by: Stephanos Ioannidis <stephanos.ioannidis@nordicsemi.no>
Move runtime code to use arch_num_cpus() instead of CONFIG_MP_NUM_CPUS
and use CONFIG_MP_MAX_NUM_CPUS for ifdef and BUILD_ASSERT macros.
Signed-off-by: Kumar Gala <kumar.gala@intel.com>
Cleanup the mess of duplicate function definitions, unnecessary
variables and duplicate strings. All banner strings are now constant in
ROM. Also fixes a double space between the end of the version string and
the trailing `***` when there is no boot delay.
Signed-off-by: Jordan Yates <jordan.yates@data61.csiro.au>
The BOOT_DELAY option does nothing in code if MULTITHREADING is not
enabled. Move the dependency to Kconfig instead.
Signed-off-by: Jordan Yates <jordan.yates@data61.csiro.au>
Change for loops of the form:
for (i = 0; i < CONFIG_MP_NUM_CPUS; i++)
...
to
unsigned int num_cpus = arch_num_cpus();
for (i = 0; i < num_cpus; i++)
...
We do the call outside of the for loop so that it only happens once,
rather than on every iteration.
Signed-off-by: Kumar Gala <kumar.gala@intel.com>
Continue to phase out MP_NUM_CPUS, change Kconfig to be
MP_MAX_NUM_CPUS and make MP_MAX_NUM_CPUS the main Kconfig symbol.
Signed-off-by: Kumar Gala <kumar.gala@intel.com>
The dummy thread doesn't include a TLS area, so any thread local variables
will fail to work if used in the switched_out tracing hook. Skip the hook
in this case, as it's not really accurate anyways; the dummy thread is
only used to set context for the initial switch for each core.
Signed-off-by: Keith Packard <keithp@keithp.com>
Using char pointers for %p should be avoided in log messages. It will
cause issues in configurations where logging strings are removed from
the binary and they are not inspected when cbprintf packages from
logging string are built. In that case any char pointers are treated as
strings and copied into the pacakge body.
Signed-off-by: Krzysztof Chruscinski <krzysztof.chruscinski@nordicsemi.no>
Introduce a Kconfig (MP_MAX_NUM_CPUS) and an api arch_num_cpus() to
allow for systems that might determine the number of CPUs available to
Zephyr at runtime.
CONFIG_MP_MAX_NUM_CPUS is intended to be used for any array initialization
and the like that needs to occur at build time. For most systems
arch_num_cpus() will just report the value of CONFIG_MP_MAX_NUM_CPUS.
The intent is to phase out CONFIG_MP_NUM_CPUS.
Signed-off-by: Kumar Gala <kumar.gala@intel.com>
Warnings are being treated as errors when building:
error: this 'for' clause does not guard...
[-Werror=misleading-indentation]
Signed-off-by: Francois Ramu <francois.ramu@st.com>
The _SYS_INIT_LEVEL* definitions were used to indicate the index entry
into the levels array defined in init.c (z_sys_init_run_level). init.c
uses this information internally, so there is no point in exposing this
in a public header. It has been replaced with an enum inside init.c. The
device shell was re-using the same defines to index its own array. This
is a fragile design; the shell needs to be responsible for its own data
indexing. A similar situation happened with some unit tests.
Signed-off-by: Gerard Marull-Paretas <gerard.marull@nordicsemi.no>
The function in charge of calling all init functions was defined in
device.c, had a public prototype and was only used in init.c. Since this
is really an internal function tied to Kernel init code, move it to
init.c and make it static, there's no need to expose it publicly.
Signed-off-by: Gerard Marull-Paretas <gerard.marull@nordicsemi.no>
The `ARCH` init level was added to solve a specific problem, call init
code (SYS_INIT/devices) before `z_cstart` in the `intel_adsp` platform.
The documentation claims it runs before `z_cstart`, but this is only
true if the SoC/arch takes care of calling:
```c
z_sys_init_run_level(_SYS_INIT_LEVEL_ARCH);
```
Which is only true for `intel_adsp` nowadays. So in practice, we now
have a platform specific init level. This patch proposes to do things in
a slightly different way. First, level name is renamed to `EARLY`, to
emphasize it runs in the early stage of the boot process. Then, it is
handled by the Kernel (inside `z_cstart()` before calling
`arch_kernel_init()`). This means that any platform can now use this
level. For `intel_adsp`, there should be no changes, other than that
`gcov_static_init()` will be called earlier (I assume this will make it
possible to obtain coverage for code called in EARLY?).
Signed-off-by: Gerard Marull-Paretas <gerard.marull@nordicsemi.no>
For historical reasons[1] suspending threads would release the
scheduler lock between pend() (which places the current thread onto a
wait queue) and z_swap() (which effects the context switch). This
process happens with the caller's lock held, so local interrupts are
masked. But on SMP this opens a tiny race where another CPU could
grab the pended thread and switch to it while we were still executing
on its stack!
Fix this by elevating the "lock swap" code that already exists in the
(portable/switch-based) z_swap() code one level so that it happens in
z_pend_curr() also. Now we hold the scheduler lock between pend and
the final context switch.
Note that this technique can't work for the older z_swap_irqlock()
implementation, which exists to vestigially support a few bits of arch
code (mostly direct interrupts) that don't work on SMP anyway.
Address with an assert to prevent future misuse.
[1] z_swap() is a historical API implemented in per-arch assembly for
older architectures (like ARM32!). It was designed to be called
with what at the time was a global IRQ lock, so it doesn't
understand the idea of a separate scheduler lock. When we finally
get all architectures on arch_switch() this design can be cleaned up
quite a bit.
Signed-off-by: Andy Ross <andyross@google.com>
We have cases where some devices need to be initialized very early,
before z_cstart() is called, e.g. to set up a very early console or to set
up memory. Traditionally this would be hardcoded as part of the SoC layer
and not use the device model or the init levels.
This patch adds a new level ARCH, which will be called in early
architecture code and before we jump to the kernel code.
Signed-off-by: Anas Nashif <anas.nashif@intel.com>
k_heap_aligned_alloc was not handling the K_FOREVER timeout
correctly due to an unsigned return value. Added explicit
K_FOREVER handling of the end time.
Fixes #50611.
Signed-off-by: Jay Shoen <jay.shoen@perceive.io>
The interrupt stack is used as the system stack during kernel
initialization while IRQs are not yet enabled. The sp register is
set to z_interrupt_stacks + CONFIG_ISR_STACK_SIZE.
CONFIG_ISR_STACK_SIZE only represents the desired usable stack size.
This does not take into account the added guard area. Result is a stack
whose pointer is much closer to the trigger zone than expected when
CONFIG_PMP_STACK_GUARD=y, and the SMP configuration in particular pushes
it over the edge during many CI test cases.
Worse: during early init we're not quite ready to handle exceptions
yet and complete havoc ensues with no meaningful debugging output.
Make sure the early assembly code locates the actual top of the stack
by generating a constant with its true size.
Signed-off-by: Nicolas Pitre <npitre@baylibre.com>
Obtaining the CPU outside of the spin locks on SMP would
result in an assert failing on __ASSERT(!z_smp_mobile()),
which makes sense as the current CPU may change.
Signed-off-by: Tom Burdick <thomas.burdick@intel.com>
The requirement for k_yield() to handle "yielding" in the idle thread
was removed a while back, but it missed a spot where we'd try to yield
in the fallback loop on bringup platforms that lack an IPI. This now
crashes, because yield now unconditionally tries to reschedule the
current thread, which doesn't work for idle threads that don't live in
the run queue.
Just make it a busy loop calling swap(), even simpler.
Fixes #50119
Signed-off-by: Andy Ross <andyross@google.com>
When building with CONFIG_SCHED_CPU_MASK_PIN_ONLY=y, CPU mask
is fixed and cannot be changed while thread is running.
The current code asserts if thread state is anything but PREPARED.
We do however have interface like k_work_queue_start() where a thread is
started as part of the queue start. To allow user to set the pinned CPU
for the work queue thread, it needs to be possible to suspend the
thread, set the mask, and then call k_thread_resume(). This seems to be
a valid sequence, so relax the assert check to reflect this.
Signed-off-by: Kai Vehmanen <kai.vehmanen@linux.intel.com>
k_poll does not currently allow polling on pipes. This adds support
for doing so on buffered pipes.
Signed-off-by: Jeremy Herbert <jeremy.006@gmail.com>
When a cache API function is called from userspace, this results on
ARM64 in an OOPS (bad syscall error). This is due to at least two
different factors:
- the location of the cache handlers is preventing the linker from
actually finding the handlers
- specifically for ARM64 and ARC some cache handling functions are not
implemented (when userspace is not used the compiler simply optimizes
out these calls)
Fix the problem by:
- moving the userspace cache handlers to their logical and proper
location (in the drivers directory)
- adding the missing handlers for ARM64 and ARC
Signed-off-by: Carlo Caione <ccaione@baylibre.com>
Many device pointers are initialized at compile and never changed. This
means that the device pointer can be constified (immutable).
Automated using:
```
perl -i -pe 's/const struct device \*(?!const)(.*)= DEVICE/const struct
device *const $1= DEVICE/g' **/*.c
```
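For illustration, a before/after sketch (the devicetree node label uart0 is
hypothetical):
```c
#include <zephyr/device.h>

/* Before: only the pointed-to device is const. */
const struct device *uart_dev = DEVICE_DT_GET(DT_NODELABEL(uart0));

/* After: the pointer itself is also const and can live in ROM. */
const struct device *const uart_dev_const = DEVICE_DT_GET(DT_NODELABEL(uart0));
```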
Signed-off-by: Gerard Marull-Paretas <gerard.marull@nordicsemi.no>
There's no point to doing this when the host OS clears all memory at
mapping time. And as it turns out, the __bss_end symbol it was
relying on actually comes from the host toolchain's linker, not our
own linker scripts (making it semi-dangerous to rely on). And it's
not present in clang/lld output anyway.
Signed-off-by: Andy Ross <andyross@google.com>
This new implementation of pipes has a number of advantages over the
previous.
1. The schedule locking is eliminated both making it safer for SMP
and allowing for pipes to be used from ISR context.
2. The code used to be structured to have separate code for copying
to/from a waiting thread's buffer and the pipe buffer. This had
unnecessary duplication that has been replaced with a simpler
scatter-gather copy model.
3. The manner in which the "working list" is generated has also been
simplified. It no longer tries to use the thread's queuing node.
Instead, the k_pipe_desc structure (whose instances are now
part of the k_thread structure) has been extended to contain
additional fields including a node for use with a linked list. As
this impacts the k_thread structure, pipes are now configurable
in the kernel via CONFIG_PIPES.
Fixes #47061
Signed-off-by: Peter Mitsis <peter.mitsis@intel.com>
Say threadA holds a mutex and threadB tries
to lock it with a timeout. A race would occur
if threadA unlocks that mutex after threadB
got unpended by sys_clock but before it gets
scheduled and calls k_spin_lock.
This patch fixes the issue by checking the
mutex's status again after the k_spin_lock call.
Fixes #48056
Signed-off-by: Qi Yang <qi.yang@cmind-semi.com>
Fixes #46324
Set dummy_thread->base.slice_ticks to 0 when
CONFIG_TIMESLICE_PER_THREAD is set, to avoid
_current_cpu->slice_ticks becoming a big number.
Signed-off-by: Hu Zhenyu <zhenyu.hu@intel.com>
Fixes an issue in sys_clock_tick_get() that could lead to drift in
a k_timer handler. The handler is invoked in the timer ISR as a
callback in sys_tick_announce().
1. The handler invokes k_uptime_ticks().
2. k_uptime_ticks() invokes sys_clock_tick_get().
3. sys_clock_tick_get() must call elapsed() and not
sys_clock_elapsed() as we do not want to count any
unannounced ticks that may have elapsed while
processing the timer ISR.
Fixes #46378
Signed-off-by: Peter Mitsis <peter.mitsis@intel.com>
Updates sys_clock_announce() such that the <announce_remaining> update
calculation is done after the callback. This prevents another core from
entering the timeout processing loop before the first core leaves it.
Signed-off-by: Peter Mitsis <peter.mitsis@intel.com>
There is no easy way to clear event bits without
the potential for a race to exist between producer(s)
and consumer(s). The result of this race is that events
can be lost through the various resetting mechanisms
available (flag to k_event_wait(), or k_event_set()).
Add k_event_set_masked() which permits bits to be set or cleared.
This allows consumers to clear just the bits that they have read
without (accidentally) discarding any new bits.
Update unit tests to verify the functionality.
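A minimal consumer sketch (event bits are hypothetical), clearing only the
bits that were actually read:
```c
#include <zephyr/kernel.h>

#define EVT_RX BIT(0)
#define EVT_TX BIT(1)

K_EVENT_DEFINE(io_events);

void consumer(void)
{
	uint32_t evts = k_event_wait(&io_events, EVT_RX | EVT_TX, false, K_FOREVER);

	/* Clear only what we observed; concurrently posted bits survive. */
	k_event_set_masked(&io_events, 0, evts);
}
```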
Partly Fixes #46117.
Signed-off-by: Andrew Jackson <andrew.jackson@amd.com>
Although there is nothing wrong with the existing code,
it doesn't permit individual bits to be set (or cleared).
This makes further changes slightly awkward.
Use a mask to restrict the bits set in an event.
Signed-off-by: Andrew Jackson <andrew.jackson@amd.com>
MISRA C:2012 Rule 14.4 (The controlling expression of an if statement
and the controlling expression of an iteration-statement shall have
essentially Boolean type.)
Use `bool' instead of `int' to represent Boolean values.
Use `do { ... } while (false)' instead of `do { ... } while (0)'.
Use comparisons with zero instead of implicitly testing integers.
This commit is a subset of the original commit:
5d02614e34a86b549c7707d3d9f0984bc3a5f22a
Signed-off-by: Simon Hein <SHein@baumer.com>
irq_lock() returns an unsigned integer key.
Generated by spatch using semantic patch
scripts/coccinelle/irq_lock.cocci
Signed-off-by: Johann Fischer <johann.fischer@nordicsemi.no>
Adds memory usage runtime stats routines that parallel those used
by both the heap and mem_blocks. This helps maintain some level
of consistency across the different memory types.
Signed-off-by: Peter Mitsis <peter.mitsis@intel.com>
Move scripts needed by the build system and not designed to be run
individually or standalone into the build subfolder.
Signed-off-by: Anas Nashif <anas.nashif@intel.com>
Update the two locations that use two `SYS_INIT` macros with the same
initialisation functions to use `SYS_INIT_NAMED`.
Signed-off-by: Jordan Yates <jordan.yates@data61.csiro.au>
Add a minimal EFI console driver to support printf; this console driver
only supports console output. Without it, printf will not work.
Signed-off-by: Enjia Mai <enjia.mai@intel.com>
Adds compatibility with Intel ADSP GDB from Zephyr SDK and
from Cadence toolchain to coredump_gdbserver.py.
Adds CAVS 15-25 (APL) register definitions. Implements
handle_register_single_read_packet to serve ADSP GDB
p packets.
Prevents the BSA from changing between stack dump printout
and coredump by taking a lock. Observed to be necessary for
accurate results on slower simulated platforms.
Signed-off-by: Lauren Murphy <lauren.murphy@intel.com>
Logging v1 has been removed and log_strdup wrapper function is no
longer needed. Removing the function and its use in the tree.
Signed-off-by: Krzysztof Chruscinski <krzysztof.chruscinski@nordicsemi.no>
This commit updates all deprecated `K_KERNEL_PINNED_STACK_ARRAY_EXTERN`
macro usages to use the `K_KERNEL_PINNED_STACK_ARRAY_DECLARE` macro
instead.
Signed-off-by: Stephanos Ioannidis <root@stephanos.io>
Files including <zephyr/kernel.h> do not have to include
<zephyr/zephyr.h>, a shim to <zephyr/kernel.h>.
Signed-off-by: Gerard Marull-Paretas <gerard.marull@nordicsemi.no>
Rename the symbols used to denote the locations of the global
constructor lists and modify the Zephyr start-up code accordingly.
On POSIX systems this ensures that the native libc init code won't
find any constructors to run before Zephyr loads.
Fixes #39347, #36858
Signed-off-by: David Palchak <palchak@google.com>
Use a new environment variable,
ZEPHYR_TOOLCHAIN_SUPPORTS_THREAD_LOCAL_STORAGE, to set the value for
TOOLCHAIN_SUPPORTS_THREAD_LOCAL_STORAGE instead of setting it to 'n' for
all non-Zephyr toolchains. In particular, the Debian arm-none-eabi
toolchain has TLS support and with this option, can be used to build
Zephyr with thread local variables.
Signed-off-by: Keith Packard <keithp@keithp.com>
Documentation specifies that aborting/terminating/exiting essential
threads is a system panic condition, but we didn't actually implement
that and allowed it just as for other threads. At least one app wants to
exploit this documented behavior as a "watchdog" kind of condition,
and that seems reasonable. Do what we say we're supposed to do.
This also includes a small fix to a test, which seemed like it was
written to exercise exactly this condition. Except that it failed to
detect whether or not a system fatal error was actually signaled and
was (incorrectly) indicating "success". Check that we actually enter
the handler.
Fixes #45545
Signed-off-by: Andy Ross <andrew.j.ross@intel.com>
The function k_thread_runtime_stats_all_get() now populates the
current_cycles field in the thread runtime stats structure.
Resets the number of cycles in the CPU's current usage window once
the idle thread is scheduled.
Fixes the average_cycles calculation.
Signed-off-by: Peter Mitsis <peter.mitsis@intel.com>
For a library which already provides a multi-thread aware errno, use
that instead of creating our own internal value.
Signed-off-by: Keith Packard <keithp@keithp.com>
This adds the internal function z_work_submit_to_queue(), which
submits the work item to the queue but doesn't force the thread to yield,
compared to the public function k_work_submit_to_queue().
When called from poll.c in the context of k_work_poll events, it ensures
that the thread does not yield in the context of the spinlock of the
object that became available.
Fixes #45267
Signed-off-by: Lucas Dietrich <ld.adecy@gmail.com>
In order to bring consistency in-tree, migrate all kernel code to the
new prefix <zephyr/...>. Note that the conversion has been scripted,
refer to zephyrproject-rtos#45388 for more details.
Signed-off-by: Gerard Marull-Paretas <gerard.marull@nordicsemi.no>
Implements a function that application and driver code can use to check
whether it is valid to yield (or block) in the current context. This
check is required for functions that can feasibly be run from multiple
contexts. The primary intended use case is power management transition
functions, which can be run by application code explicitly or
automatically in the idle thread by system PM.
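A minimal sketch of the intended use, assuming the new function is
k_can_yield() and the wait helper is hypothetical:
```c
#include <zephyr/kernel.h>

/* Hedged sketch: a PM transition helper that may run either from a thread
 * or from the idle thread chooses between sleeping and busy-waiting. */
static void wait_for_power_rail_us(uint32_t usec)
{
	if (k_can_yield()) {
		k_usleep(usec);    /* safe to block in this context */
	} else {
		k_busy_wait(usec); /* cannot block: spin instead */
	}
}
```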
Signed-off-by: Jordan Yates <jordan.yates@data61.csiro.au>
This adds lazy floating point context switching. On svc/irq entrance,
the VFP is disabled and a pointer to the exception stack frame is saved
away. If the esf pointer is still valid on exception exit, then no
other context used the VFP so the context is still valid and nothing
needs to be restored. If the esf pointer is NULL on exception exit,
then some other context used the VFP and the floating point context is
restored from the esf.
The undefined instruction handler is responsible for saving away the
floating point context if needed. If the handler is in the first
irq/svc context and the current thread uses the VFP, then the float
context needs to be saved. Also, if the handler is in a nested context
and the previous context was using the VFP, save the float context.
Signed-off-by: Bradley Bolen <bbolen@lexmark.com>
Do not allow changing the CPU to which a thread is pinned while it is
already being executed. This allows further optimizations on some
platforms with incoherent memory, since we can safely assume that the
thread will run on the same CPU and avoid invalidating / flushing the
cache during context switches.
Signed-off-by: Flavio Ceolin <flavio.ceolin@intel.com>
The k_timer utility was written to assume that the kernel timeout
handler would never be delayed by more than a tick, so it can naively
reschedule the next interrupt with a simple delay.
Unfortunately real platforms have glitchy hardware and high tick
rates, and on intel_adsp we're seeing this promise being broken in
some circumstances.
It's probably not a good idea to try to plumb the timer driver
interface up into the IPC layer to do this correction, but thankfully
the existing absolute timeout API provides the tools we need (though
it does require that CONFIG_TIMEOUT_64BIT be enabled).
Signed-off-by: Andy Ross <andrew.j.ross@intel.com>
The original design intent with arch_sched_ipi() was that
interprocessor interrupts were fast and easily sent, so to reduce
latency the scheduler should notify other CPUs synchronously when
scheduler state changes.
This tends to result in "storms" of IPIs in some use cases, though.
For example, SOF will enumerate over all cores doing a k_sem_give() to
notify a worker thread pinned to each, each call causing a separate
IPI. Add to that the fact that unlike x86's IO-APIC, the intel_adsp
architecture has targeted/non-broadcast IPIs that need to be repeated
for each core, and suddenly we have an O(N^2) scaling problem in the
number of CPUs.
Instead, batch the "pending" IPIs and send them only at known
scheduling points (end-of-interrupt and swap). This semantically
matches the locations where application code will "expect" to see
other threads run, so arguably is a better choice anyway.
Signed-off-by: Andy Ross <andrew.j.ross@intel.com>
The work queue has a semi/non-standard reschedule point implemented
using k_yield(), with a check to see if the current thread is
preemptible. Just call z_reschedule_unlocked(), it has this check
internally and is the intended API for this.
Really, this is only a half fix. Ideally the schedule point and the
lock release should be atomic[1] via the more idiomatic
z_reschedule(). But that would take some surgery, so let's go with
the simpler cleanup first.
This also avoids having to duplicate logic that gets added to
reschedule points by an upcoming patch.
[1] So that they represent a condition variable and don't race at the
end. In this case the race is present but benign, since the only thing
we really want to know is that the queue thread gets a chance to run.
The only cost is an occasional duplicated/needless context switch if
two threads are racing on a submit.
Signed-off-by: Andy Ross <andrew.j.ross@intel.com>
Removes an unnecessary schedule lock/unlock pair from k_mutex_unlock().
Rationale: Given that only the current thread (which would also be the
mutex owner) will be able to modify the mutex object AND that a
recursive unlock ought never trigger any reschedule (as it does not
touch the pend queue), then performing a schedule lock is not needed
prior to testing for a recursive unlock.
Furthermore, even if it is not a recursive unlock, then a schedule lock
is superfluous as the existing spinlock provides sufficient protection.
Signed-off-by: Peter Mitsis <peter.mitsis@intel.com>
When threads are in more than one state at a time, k_thread_state_str()
returns a string that lists each of its states delimited by a '+'.
This in turn necessitates a change to the API that includes both a
pointer to the buffer to use for the string and the size of the buffer.
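A minimal sketch of the updated API, where the caller now supplies the
buffer (buffer size and function name log_thread_state are illustrative):
```c
#include <zephyr/kernel.h>
#include <zephyr/sys/printk.h>

void log_thread_state(k_tid_t tid)
{
	char state_str[32];

	/* May produce e.g. "queued" or "pending+suspended". */
	printk("thread %p state: %s\n", (void *)tid,
	       k_thread_state_str(tid, state_str, sizeof(state_str)));
}
```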
Signed-off-by: Peter Mitsis <peter.mitsis@intel.com>
Add an API that clears cpu mask from a thread and sets it to a specific
CPU.
This is the equivalent of:
k_thread_cpu_mask_clear(&thread);
k_thread_cpu_mask_enable(&thread, cpu_idx);
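A minimal sketch, assuming the new helper is named k_thread_cpu_pin():
```c
#include <zephyr/kernel.h>

/* Hedged sketch: pin a thread to a single CPU in one call. */
void pin_worker(struct k_thread *thread, int cpu_idx)
{
	k_thread_cpu_pin(thread, cpu_idx);
}
```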
Signed-off-by: Anas Nashif <anas.nashif@intel.com>
Instead of resizing all device handles, we just resize devices that are
power domains. This means that a power domain has to be declared as
compatible with "power-domain" in its devicetree node.
Signed-off-by: Flavio Ceolin <flavio.ceolin@intel.com>
Add an API to add devices to a power domain at runtime. The number of
devices that can be added is defined at build time.
The script gen_handles.py will check the number defined in
`CONFIG_PM_DEVICE_POWER_DOMAIN_DYNAMIC` to resize the handles vector,
adding empty slots in the supported sector to be used later.
Signed-off-by: Flavio Ceolin <flavio.ceolin@intel.com>
Commit b1182bf83b ("kernel/timeout: Serialize handler callbacks on
SMP") introduced an important fix to timeout handling on
multiprocessor systems, but it did it in a clumsy way by holding a
spinlock across the entire timeout process on all cores (everything
would have to spin until one core finished the list). The lock also
delays any nested interrupts that might otherwise be delivered, which
breaks our nested_irq_offload case on xtensa+SMP (where contra x86,
the "synchronous" interrupt is sensitive to mask state).
Doing this right turns out not to be so hard: take the timeout lock,
check to see if someone is already iterating
(i.e. "announce_remaining" is non-zero), and if so just increment the
ticks to announce and exit. The original cpu will then complete the
full timeout list without blocking any others longer than needed to
check the timeout state.
Fixes #44758
Signed-off-by: Andy Ross <andrew.j.ross@intel.com>
On multiprocessor systems, it's routine to enter sys_clock_announce()
in parallel (the driver will generally announce zero ticks on all but
one cpu).
When that happens, each call will independently enter the loop over
the timeout list. The access is correctly synchronized, so the list
handling is correct. But the lock is RELEASED around the invocation
of the callback, which means that the individual callbacks may
interleave between cpus. That means that individual
application-provided callbacks may be executed in parallel, which to
the app is indistinguishable from "out of order".
That's surprising and error-prone. Don't do it. Place a secondary
outer spinlock around the announce loop (but not the timeslicing
handling) to correctly serialize the timeout handling on a single cpu.
(It should be noted that this was discovered not because of a timeout
callback race, but because the resulting simultaneous calls to
sys_clock_set_timeout from separate cores seems to cause extremely
high latency excursions on intel_adsp hardware using the cavs_timer
driver. That hardware issue is still poorly understood, but this fix
is desirable regardless.)
Signed-off-by: Andy Ross <andrew.j.ross@intel.com>
The idle thread got an index suffix in #23536 to make it easier to
identify different idle threads on different cores. This looks out of
place on single-core devices when the idle thread is listed next to
other kernel threads, such as main.
Remove the idle thread index on single-core platforms, and replace all
references to this format in tests and documentation.
Signed-off-by: Trond Einar Snekvik <Trond.Einar.Snekvik@nordicsemi.no>
This is an attempt at formally distinguishing and supporting the case
described in 40795 where an architecture doesn't preserve/restore the
complete thread state upon entering/exiting interrupt exception state.
This is mainly about promoting the current behavior from the accepted
workaround to a formal API specification. This workaround is currently
used on ARM64 but RISC-V requires it too.
Signed-off-by: Nicolas Pitre <npitre@baylibre.com>
A reference to the entropy device can be obtained at compile time, so
avoid using device_get_binding().
Signed-off-by: Gerard Marull-Paretas <gerard.marull@nordicsemi.no>
There is an API to get a specific number of random bytes. There is
no need to re-implement this logic here.
Signed-off-by: Flavio Ceolin <flavio.ceolin@intel.com>
Zephyr's timeslice implementation has always been somewhat primitive.
You get a global timeslice that applies broadly to the whole bottom of
the priority space, with no ability (beyond that one priority
threshold) to tune it to work on certain threads, etc...
This adds an (optionally configurable) API that allows timeslicing to
be controlled on a per-thread basis: any thread at any priority can be
set to timeslice, for a configurable per-thread slice time, and at the
end of its slice a callback can be provided that can take action.
This allows the application to implement things like responsiveness
heuristics, "fair" scheduling algorithms, etc... without requiring
that facility in the core kernel.
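A hedged sketch of such a per-thread configuration, assuming
CONFIG_TIMESLICE_PER_THREAD=y and that the API is named
k_thread_time_slice_set() with a callback of the shape shown:
```c
#include <zephyr/kernel.h>

/* Assumed callback shape: invoked when the thread's slice expires. */
static void slice_expired(struct k_thread *thread, void *data)
{
	ARG_UNUSED(thread);
	ARG_UNUSED(data);
	/* application-defined policy, e.g. demote or rotate the thread */
}

void enable_custom_slicing(struct k_thread *worker)
{
	k_thread_time_slice_set(worker, k_ms_to_ticks_ceil32(5),
				slice_expired, NULL);
}
```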
Signed-off-by: Andy Ross <andrew.j.ross@intel.com>
According to Kconfig guidelines, boolean prompts must not start with
"Enable...". The following command has been used to automate the changes
in this patch:
sed -i "s/bool \"[Ee]nables\? \(\w\)/bool \"\U\1/g" **/Kconfig*
Signed-off-by: Gerard Marull-Paretas <gerard.marull@nordicsemi.no>
Things had gotten a little tangled in there so let's do some cleanup.
Remove the distressingly-special-purpose z_reinit_idle_thread() hook
(which existed to support secondary core bringup when
SMP_BOOT_DELAY=y), and just fold that into a generic z_init_cpu(),
which we can call in obvious and symmetric ways from main
initialization, z_smp_init(), and z_smp_start_cpu() (the now-official
programmatic hook for starting cpus).
Remove the "#if CONFIG_MP_NUM_CPUS > 1" exclusions. These weren't
saving any code size and were propagating themselves into platform
layers trying to avoid build failures.
There are some "special" APIs added for SOF which need to go away in
favor of the newer/generic z_smp_start_cpu(). Collect them in one
place and put them under a "#ifdef CONFIG_SOF" to prevent them from
being used in Zephyr apps.
Move some function declarations that didn't have homes into
<kernel/thread.h>.
Signed-off-by: Andy Ross <andrew.j.ross@intel.com>
This adds a LOG_DBG() line for z_phys_unmap which mirrors
what is in z_phys_map(). This also fixes a warning from
Clang about a variable being set but never used (addr_offset).
Signed-off-by: Daniel Leung <daniel.leung@intel.com>
Commit 678b76e4b0 ("kernel/init.c: allow for memset/memcpy
alternatives during early boot") and commit da28829b64 ("kernel:
zero the bss section of OCM memory at boot time") were created
independently and missed changes from each other.
Signed-off-by: Nicolas Pitre <npitre@baylibre.com>
The x86 and xtensa implementations of irq_offload() invoke synchronous
interrupts on the local CPU, and are therefore safe to use from within
an interrupt context. This is a cheap and portable way to exercise
nested interrupts, which are otherwise highly platform-dependent to
test. Add a kconfig to signal the capability.
Signed-off-by: Andy Ross <andrew.j.ross@intel.com>
Zeroing the BSS and copying data to RAM with regular memset/memcpy may
cause problems when those functions are assuming a fully initialized
system for their optimizations to work e.g. some instructions require
an active MMU, but turning the MMU on needs the .bss section to be
cleared first, etc.
Commit c5b898743a ("aarch64: Fix alignment fault on z_bss_zero()")
provides a detailed explanation of such a case.
Replacing z_bss_zero() with an architecture specific one is problematic
as the former may see new sections added to it that would be missed by
the latter. The same reasoning goes for z_data_copy().
Let's make maintenance much easier by providing weak versions of
memset/memcpy that can be overridden by architecture-specific safe
versions when needed.
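As a sketch, the weak default could look like the following, with an
architecture supplying a strong definition of the same symbol when a safe
variant is needed (the z_early_* names follow the commits referenced above):
```c
#include <string.h>
#include <toolchain.h>

/* Weak default: just use the regular libc memset. An architecture whose
 * optimized memset cannot run this early (e.g. before the MMU is enabled)
 * overrides this with a strong, safe z_early_memset() of its own. */
void __weak z_early_memset(void *dst, int c, size_t n)
{
	(void)memset(dst, c, n);
}
```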
Signed-off-by: Nicolas Pitre <npitre@baylibre.com>
Extracting stack usage calculation from k_thread_stack_space_get to
z_stack_space_get so it can be used also for interrupt stack.
Signed-off-by: Krzysztof Chruscinski <krzysztof.chruscinski@nordicsemi.no>
If a chosen entry exists for a memory area of type OCM, zero the OCM
memory's bss section at boot-time.
Signed-off-by: Immo Birnbaum <Immo.Birnbaum@weidmueller.com>
We can't simply use CLAMP to set the next timeout because
when CONFIG_SYSTEM_CLOCK_SLOPPY_IDLE is set, MAX_WAIT is
a negative number, and CLAMP would then be called with
the higher boundary lower than the lower boundary.
Fixes #41422
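A self-contained illustration of the failure mode (the macro below is a
simplified stand-in, not the kernel's own definition):
```c
#include <stdio.h>

/* A typical clamp: only meaningful when low <= high. */
#define CLAMP(x, low, high) \
	((x) < (low) ? (low) : ((x) > (high) ? (high) : (x)))

int main(void)
{
	long long max_wait = -1; /* stand-in for MAX_WAIT under SLOPPY_IDLE */
	long long next = 100;    /* a perfectly reasonable next timeout */

	/* With high < low the result is nonsense: prints -1 instead of 100. */
	printf("%lld\n", CLAMP(next, 1, max_wait));
	return 0;
}
```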
Signed-off-by: Flavio Ceolin <flavio.ceolin@intel.com>
So that logging and "satellite" subsystems, such as tracing and object
tracking, can count on kernel structs, such as `_current_cpu`, being
properly initialised.
Fixes#42061.
Signed-off-by: Ederson de Souza <ederson.desouza@intel.com>
Updates sched_cpu_update_usage() such that the CPU runtime stats
only update its non-idle time when the current thread is not the
idle thread. This is necessary as otherwise the CPU's idle time will
be double counted in k_thread_runtime_stats.execution_cycles.
Signed-off-by: Peter Mitsis <peter.mitsis@intel.com>
Introduce a hidden kconfig CONFIG_KERNEL_VM_SUPPORT which
enables some kconfigs that are required for virtual memory
support. CONFIG_KERNEL_VM_BASE, CONFIG_KERNEL_VM_OFFSET,
and CONFIG_KERNEL_VM_SIZE are moved under this new kconfig
so these can be enabled independent of CONFIG_MMU.
Signed-off-by: Daniel Leung <daniel.leung@intel.com>
This moves CONFIG_MMU and its children from arch/Kconfig into
kernel/Kconfig. These are used to enable kernel support of MMU
so they should be under kernel/.
Signed-off-by: Daniel Leung <daniel.leung@intel.com>
There is no need to use conditional compilation for the function
prototypes in the kernel architecture header file. So remove it.
An added bonus is that these functions can appear in the documentation
without being explicitly enabled via pre-defines during the doc build.
Signed-off-by: Daniel Leung <daniel.leung@intel.com>
Add a Kconfig option for the maximum timeout used for conversion. The
option determines which conversion algorithm to use: a faster one that
overflows earlier, or a slower one without the early overflow.
Signed-off-by: Krzysztof Chruscinski <krzysztof.chruscinski@nordicsemi.no>
Commit 3457118 changed the order of z_smp_thread_init() and
smp_timer_init() in the smp_init_top() subroutine, which initializes
the other processors. On some boards (up_squared, acrn_ehl_crb) SMP
bring-up fails if the timer interrupt is enabled before the first
thread has been initialized. Change back to the original order.
Fixes #41835
Signed-off-by: Enjia Mai <enjia.mai@intel.com>
For functions returning nothing, there is no need to document
with @return, as Doxygen complains about "documented empty
return type of ...".
Signed-off-by: Daniel Leung <daniel.leung@intel.com>
This has bitrotten a bit. Early implementations had a synchronous
arch_start_cpu(), but then we started allowing that to be an async
operation. But that means that CPU start now becomes surprisingly
reentrant to the arch layer (cpu 0 can get a call to start cpu 2 while
cpu 1's initialization code is still running). That's just error
prone; we never documented the requirements cleanly (the window is
very small, but not so small on a slow simulator!).
Add an extra flag so we don't issue the next start until the last is
out of the arch layer and running in smp_init_top().
Signed-off-by: Andy Ross <andrew.j.ross@intel.com>
Adds two routines to flush pipe objects:
k_pipe_flush()
- This routine flushes the entire pipe. That includes both
the pipe's buffer and all pended writers. It is equivalent
to reading everything into a giant temporary buffer which
is then discarded.
k_pipe_buffer_flush()
- This routine flushes only the pipe's buffer (if it exists).
It is equivalent to reading a maximum of "buffer size" bytes
into a temporary buffer which is then discarded.
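A short usage sketch (the pipe definition and sizes are illustrative):
```c
#include <kernel.h>

K_PIPE_DEFINE(my_pipe, 64, 4);

void discard_stale_data(void)
{
	/* Drop only what currently sits in the pipe's internal buffer. */
	k_pipe_buffer_flush(&my_pipe);

	/* Or drop buffered data and release pended writers as well. */
	k_pipe_flush(&my_pipe);
}
```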
Signed-off-by: Peter Mitsis <peter.mitsis@intel.com>
Fixes a race condition in the k_pipe_cleanup() routine by adding
a spinlock. Additionally, internal counters are now reset after
freeing the buffer as the pipe has now become a bufferless pipe.
Signed-off-by: Peter Mitsis <peter.mitsis@intel.com>
Resolves void pointer arithmetic build warnings in k_pipe_put() by
casting the pointer to a uint8_t pointer.
Signed-off-by: Peter Mitsis <peter.mitsis@intel.com>
Extends the CPU usage runtime stats to track current, total, peak
and average usage (as bounded by the scheduling of the idle thread).
This permits a developer to obtain more system information if desired
to tune the system.
Signed-off-by: Peter Mitsis <peter.mitsis@intel.com>
When the new Kconfig option CONFIG_SCHED_THREAD_USAGE_ANALYSIS
is enabled, additional timing stats are collected during context
switches. This extra information allows a developer to obtain the
current, longest, average and total lengths of the time that
a thread has been scheduled to execute.
A developer can in turn use this information to tune their app and/or
alter their scheduling policies.
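For instance, a sketch of reading the stats for a thread (the exact names of
the extra per-thread fields are assumed here rather than quoted from the
header):
```c
#include <kernel.h>

void report_usage(k_tid_t tid)
{
	k_thread_runtime_stats_t stats;

	if (k_thread_runtime_stats_get(tid, &stats) == 0) {
		/* With CONFIG_SCHED_THREAD_USAGE_ANALYSIS enabled the struct
		 * is assumed to also carry current/longest/average figures
		 * next to the existing execution_cycles counter. */
		printk("ran for %llu cycles\n",
		       (unsigned long long)stats.execution_cycles);
	}
}
```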
Signed-off-by: Peter Mitsis <peter.mitsis@intel.com>
This commit does two things to the z_sched_thread_usage(). First,
it updates the API so that it accepts a pointer to the runtime
stats instead of simply returning the usage cycles. This gives it
the flexibility to retrieve additional statistics in the future.
Second, the runtime stats are only updated if the specified thread
is the current thread running on the current core.
Signed-off-by: Peter Mitsis <peter.mitsis@intel.com>
Moves the CONFIG_SCHED_THREAD_USAGE block of code out of sched.c
into its own file. Not only does this code employ its own private
spinlock, but it is expected that additional usage routines will be
added in the future.
Signed-off-by: Peter Mitsis <peter.mitsis@intel.com>
The functionality provided by device_usable_check is already provided by
device_is_ready. The (z_)device_usable_check APIs have been
re-implemented using the (z_)device_is_ready APIs and have been marked
as deprecated.
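Migration is mechanical; a brief sketch (the devicetree node label is
illustrative):
```c
#include <device.h>

void check_sensor(void)
{
	const struct device *dev = DEVICE_DT_GET(DT_NODELABEL(sensor0));

	/* Previously: device_usable_check(dev) == 0 meant "ready". */
	if (!device_is_ready(dev)) {
		/* handle the missing/uninitialized device */
	}
}
```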
Signed-off-by: Gerard Marull-Paretas <gerard.marull@nordicsemi.no>
Instead of using device_usable_check() syscall, implement a new syscall
for device_is_ready that uses z_device_is_ready underneath.
Signed-off-by: Gerard Marull-Paretas <gerard.marull@nordicsemi.no>
Rename z_device_ready to z_device_is_ready. Function name suggests a
boolean result this way, in line with other functions (e.g.
device_is_ready).
Signed-off-by: Gerard Marull-Paretas <gerard.marull@nordicsemi.no>
The resource pool of the short-lived dummy thread "stub" may be
inherited by other threads created during system initialization. This
commit initializes this resource pool to NULL or the system pool to
ensure that a well-defined resource pool propagates to other threads
that inherit it from the dummy thread.
Fixes#41482.
Signed-off-by: Berend Ozceri <berend@recogni.com>
Move z_priq_mq_add and z_priq_mq_remove into #ifdef CONFIG_SCHED_MULTIQ
block, because they are only used with that config.
Signed-off-by: Jeremy Bettis <jbettis@google.com>
Previous commit 55350a93e9, fixing
address-of-packed-member warnings, uncovered an issue with
the alignment of dynamic kernel objects. On 64-bit platforms,
the alignment is 16 bytes instead of the 4/8 bytes of a plain
pointer (void *). This changes the function that maps kernel
object types to alignments so that it uses the dynamic object
struct, rather than a simple pointer, as the basis for alignment.
This also uncomments the assertion added in the previous commit
55350a93e9 so that we can keep
an eye on the alignment in the future. Note that the assertion
is moved after the check of whether the incoming kernel object is
dynamically allocated. Static kernel objects are not subject
to this alignment requirement.
Fixes #41062
Signed-off-by: Daniel Leung <daniel.leung@intel.com>
Applies the 'static' keyword to the following inlined routines:
z_priq_dumb_add()
z_priq_mq_add()
z_priq_mq_remove()
As those routines are only used in one place, they no longer have
externally visible declarations.
Signed-off-by: Peter Mitsis <peter.mitsis@intel.com>
Removed unused functions, or moved them inside #ifdefs.
This allows using -Werror=unused-function with the clang compiler. Tested
by building the ChromeOS EC on all supported platforms with
-Werror=unused-function.
Signed-off-by: Jeremy Bettis <jbettis@google.com>
The warning below appears once -Waddress-of-packed-member is enabled:
/home/carles/src/zephyr/zephyr/kernel/userspace.c: In function
'unref_check':
/home/carles/src/zephyr/zephyr/kernel/userspace.c:471:28: warning:
converting a packed 'struct z_object' pointer (alignment 4) to a 'struct
dyn_obj' pointer (alignment 16) may result in an unaligned pointer value
[-Waddress-of-packed-member]
  471 | CONTAINER_OF(ko, struct dyn_obj, kobj);
To avoid the warning, use an intermediate void * variable.
More info in #16587.
Signed-off-by: Carles Cufi <carles.cufi@nordicsemi.no>
The following warning is triggered by GCC when
-Waddress-of-packed-member is enabled:
/home/carles/src/zephyr/zephyr/kernel/mmu.c: In function
'free_page_frame_list_put':
/home/carles/src/zephyr/zephyr/kernel/mmu.c:383:42: warning: taking
address of packed member of 'struct z_page_frame' may result in an
unaligned pointer value [-Waddress-of-packed-member]
383 | sys_slist_append(&free_page_frame_list, &pf->node);
This is due to the fact that sys_snode_t node is an unpacked structure
inside a packed z_page_frame structure, so that the alignment of the
former cannot be ensured if placed inside the latter.
Given that alignment of z_page_frame is ensured by the code, silence the
compiler by going through an intermediate variable.
More info in #16587.
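The workaround applied is roughly the following (a simplified sketch of the
change to the line quoted above, not the literal diff):
```c
/* Taking &pf->node directly trips -Waddress-of-packed-member because the
 * compiler cannot prove the alignment of a member of a packed struct.
 * Routing the address through a void * (alignment requirement of 1) keeps
 * the behaviour identical while silencing the warning; the surrounding
 * code already guarantees that z_page_frame is suitably aligned. */
void *node = &pf->node;

sys_slist_append(&free_page_frame_list, node);
```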
Signed-off-by: Carles Cufi <carles.cufi@nordicsemi.no>
Storing the state where this is the first GDB break can be done
in the main GDB stub code. There is no need to store the state
in architecture layer.
Signed-off-by: Daniel Leung <daniel.leung@intel.com>
Removing the 'U' suffix to avoid changing the type of num_events,
and making sure the Z_SYSCALL_VERIFY macro check is meaningful.
Fixes #40614
Signed-off-by: NingX Zhao <ningx.zhao@intel.com>
The virtual region bitmap bitarray struct is only used within
the source file, so it can be declared static.
Signed-off-by: Daniel Leung <daniel.leung@intel.com>
Adds an API to query and visit supported devices. Follows the example
set by the required devices API.
Implements #37793.
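A usage sketch, assuming the visitor signature mirrors the required-devices
API (callback and names below are illustrative):
```c
#include <kernel.h>
#include <device.h>

static int print_supported(const struct device *dev, void *context)
{
	ARG_UNUSED(context);
	printk("  supported: %s\n", dev->name);
	return 0;
}

void list_supported(const struct device *power_domain)
{
	/* Visit every device that lists `power_domain` among its handles. */
	(void)device_supported_foreach(power_domain, print_supported, NULL);
}
```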
Signed-off-by: Jordan Yates <jordan.yates@data61.csiro.au>
This updates k_mem_domain_add_thread() to return errors so
the application has a chance to recover.
Signed-off-by: Daniel Leung <daniel.leung@intel.com>
This changes both k_mem_domain_add_partition() and
k_mem_domain_remove_partition() to return errors instead of
asserting when errors are encountered. This gives the application
a chance to recover.
The arch_mem_domain_partition_add()/_remove() will be modified
later together with all the other arch_mem_domain_*() changes
since the architecture code for partition addition and removal
functions usually cannot be separately changed.
Signed-off-by: Daniel Leung <daniel.leung@intel.com>
This changes k_mem_domain_init() to return error values
instead of asserting when errors are encountered.
This gives applications a chance to recover if needed.
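Together with the two changes above, an application can now check and
recover; a brief sketch:
```c
#include <kernel.h>

static struct k_mem_domain app_domain;

int setup_domain(struct k_mem_partition *part, k_tid_t tid)
{
	int ret = k_mem_domain_init(&app_domain, 0, NULL);

	if (ret == 0) {
		ret = k_mem_domain_add_partition(&app_domain, part);
	}
	if (ret == 0) {
		ret = k_mem_domain_add_thread(&app_domain, tid);
	}

	return ret; /* caller may log, retry, or fall back instead of faulting */
}
```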
Signed-off-by: Daniel Leung <daniel.leung@intel.com>
Remove LOG_MINIMAL kconfig option which was confusing
since LOG_MODE_MINIMAL existed. LOG_MINIMAL was used to
force minimal mode but because of invalid dependencies
it was leading to issues.
Refactored code to use LOG_MODE_MINIMAL everywhere and
renamed LOG_MINIMAL to LOG_DEFAULT_MINIMAL, which has impact
on the default logging mode (which can still be changed later
in a conf file or in menuconfig).
Signed-off-by: Krzysztof Chruscinski <krzysztof.chruscinski@nordicsemi.no>
Added heap reference parameter to k_free tracing
hook to allow tracing of the pointer which was
passed as a parameter to a k_free call.
As part of this update the defines
(for this hook) in the various tracing formats
were also updated.
Signed-off-by: Torbjörn Leksell <torbjorn.leksell@percepio.com>
With `gen_handles.py` now running on the first pre-built image,
`zephyr_pre0.elf` there is no requirement for the device handle arrays
to remain the same size after processing.
Remove the padding generated in `gen_handles.py`, as well as the
temporary option `CONFIG_DEVICE_HANDLE_PADDING` which was added to work
around this issue.
Signed-off-by: Jordan Yates <jordan.yates@data61.csiro.au>
page_frame_dump() and z_page_frames_dump() are used for
debug printing, so there is no need to cover those functions.
The __weak function is also excluded, as every test overrides it.
Signed-off-by: Lixin Guo <lixinx.guo@intel.com>
It turns out that we have a sample (though not a test) that really
does want to use "k_thread_runtime_stats_all_get()" to measure system
uptime.
Instead of breaking this needlessly, separate the accounting for idle
and non-idle threads. The legacy API can report their sum, and the
more useful value is available via the kernel struct for future
analysis.
Signed-off-by: Andy Ross <andrew.j.ross@intel.com>
Clean up RUNTIME_STATS to separate the API from the individual data
backends. Use the SCHED_THREAD_USAGE tracking instead of the original
for execution_cycles. Move the kconfig for that into the runtime
stats menu, since it's part of the family now.
Also remove a lot of needless #if's around the declarations. Unused
structs and uncalled functions don't need to be explicitly hidden. An
attempt to access a non-existent field (e.g. "execution_cycles" if
that isn't configured) provides all the build time validation we need.
Signed-off-by: Andy Ross <andrew.j.ross@intel.com>
The runtime stats feature has always supported this, so use the same
kconfig to indirect the timing source in the same way.
(Personally I'm not a fan of the "timing" API, which really doesn't do
anything that the existing core "cycles" API does not except add a
bunch of code due to the separate implementation of frequency
management and conversion routines. It comes from an era where
"cycles" were fixed to a MHz frequency clock on platforms like x86 yet
we had benchmarks that wanted to use the TSC. Those days are behind
us and "cycles" can be fast everywhere.)
Signed-off-by: Andy Ross <andrew.j.ross@intel.com>
On older architectures, we don't have the
architecture-independent/scheduler-internal hooks (which require
USE_SWITCH) but there is a hook shared by the tracing layer we can use.
This is sort of a layering violation (stat tracking is a core feature,
tracing is supposed to be optional), but simple and lightweight. And
eventually it will go away as these architectures migrate.
Signed-off-by: Andy Ross <andrew.j.ross@intel.com>
This is an alternate backend that does what THREAD_RUNTIME_STATS is
doing currently, but with a few advantages:
* Correctly synchronized: you can't race against a running thread
(potentially on another CPU!) while querying its usage.
* Realtime results: you get the right answer always, up to timer
precision, even if a thread has been running for a while
uninterrupted and hasn't updated its total.
* Portable, no need for per-architecture code at all for the simple
case. (It leverages the USE_SWITCH layer to do this, so won't work
on older architectures)
* Faster/smaller: minimizes use of 64 bit math; lower overhead in
thread struct (keeps the scratch "started" time in the CPU struct
instead). One 64 bit counter per thread and a 32 bit scratch
register in the CPU struct.
* Standalone. It's a core (but optional) scheduler feature, no
dependence on para-kernel configuration like the tracing
infrastructure.
* More precise: allows architectures to optionally call a trivial
zero-argument/no-result cdecl function out of interrupt entry to
avoid accounting for ISR runtime in thread totals. No configuration
needed here, if it's called then you get proper ISR accounting, and
if not you don't.
For right now, pending unification, it's added side-by-side with the
older API and left as a z_*() internal symbol.
Signed-off-by: Andy Ross <andrew.j.ross@intel.com>
Add spinlock unlocking before calling timer expiration
handler. Locking was introduced by dde3d6c.
Signed-off-by: Krzysztof Chruscinski <krzysztof.chruscinski@nordicsemi.no>
Instead of returning PM_STATE_ACTIVE when the CPU didn't enter a
low power state, and a different state when it entered one but has
already left it and is active again, change pm_system_suspend to
return true when the CPU has entered a low power state and false
otherwise.
Signed-off-by: Flavio Ceolin <flavio.ceolin@intel.com>
There was a brief (but seen in practice on real apps on real
hardware!) race with the switch-based z_swap() implementation. The
thread return value was being initialized to -EAGAIN after the
enclosing lock had been released. But that lock is supposed to be
atomic with the thread suspend.
This opened a window for another racing thread to come by and "wake
up" our pending thread (which is fine on its own), set its return
value (e.g. to 0 for success) and then have that value clobbered by
the thread continuing to suspend itself outside the lock.
Melodramatic aside: I continue to hate this
arch_thread_return_value_set() API; it needs to die. At best it's a
mild optimization on a handful of architectures (e.g. x86 implements
it by writing to the EAX register save slot in the context block).
Asynchronous APIs are almost always worse than synchronous ones, and
in this case it's an async operation that races against literal
context switch code that can't use traditional locking strategies.
Fixes #39575
Signed-off-by: Andy Ross <andrew.j.ross@intel.com>
Threads may wait on an event object such that any events posted to
that event object may wake a waiting thread if the posting satisfies
the waiting threads' event conditions.
The configuration option CONFIG_EVENTS is used to control the inclusion
of events in a system as their use increases the size of
'struct k_thread'.
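A minimal usage sketch (event bits and timeout are illustrative):
```c
#include <kernel.h>

#define EVT_RX_DONE BIT(0)
#define EVT_TX_DONE BIT(1)

static struct k_event io_events;

void io_init(void)
{
	k_event_init(&io_events);
}

void rx_complete(void)
{
	/* Wakes any waiter whose condition includes EVT_RX_DONE. */
	k_event_post(&io_events, EVT_RX_DONE);
}

void wait_for_io(void)
{
	/* Block until either event is posted, or give up after 100 ms. */
	uint32_t events = k_event_wait(&io_events, EVT_RX_DONE | EVT_TX_DONE,
				       false, K_MSEC(100));

	if (events == 0) {
		/* timed out */
	}
}
```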
Signed-off-by: Peter Mitsis <peter.mitsis@intel.com>
The k_work::flags field is not an atomic_t and would cause
a -Wpointer-sign warning on some compilers. This function was the only
one in work.c to use atomic_get() so there is no benefit to atomicity.
Signed-off-by: Chris Reed <chris.reed@arm.com>
In the case where the aligned memory range is on top of the allocated
memory range, freeing the zero-sized unused memory at the top will
trigger an assert in the virt_region_free() call since vaddr could be equal
to Z_VIRT_REGION_END_ADDR.
Signed-off-by: Neil Armstrong <narmstrong@baylibre.com>
On ARM64 platforms, when mapping multiple memory zones with size
not multiple of a L2 block size (2MiB), all the following mappings
will probably use L3 tables.
And a huge mapping will consume all possible L3 tables.
In order to reduce usage of L3 tables, this introduces a new
arch_virt_region_align() optional architecture specific
call to eventually return a more optimal virtual address
alignment than the default MMU_PAGE_SIZE.
This alignment is used in virt_region_alloc() by:
- requesting more pages in virt_region_bitmap to make sure we request
up to the possible aligned virtual address
- freeing the supplementary pages used for alignment
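A sketch of what an architecture-specific implementation might look like (the
prototype shown is assumed here, not quoted from the tree):
```c
#include <stddef.h>
#include <stdint.h>

#define L2_BLOCK_SIZE (2UL * 1024 * 1024) /* ARM64 L2 block: 2 MiB */

/* Hypothetical override: ask virt_region_alloc() to align large regions
 * to an L2 block so they can be mapped without consuming L3 tables.
 * CONFIG_MMU_PAGE_SIZE is supplied by the build (Kconfig). */
size_t arch_virt_region_align(uintptr_t phys, size_t size)
{
	(void)phys;

	/* Regions of at least one L2 block get block alignment; everything
	 * else keeps the default page-size alignment. */
	return (size >= L2_BLOCK_SIZE) ? L2_BLOCK_SIZE : CONFIG_MMU_PAGE_SIZE;
}
```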
Suggested-by: Nicolas Pitre <npitre@baylibre.com>
Signed-off-by: Neil Armstrong <narmstrong@baylibre.com>
When CONFIG_USERSPACE is enabled, the ELF file from linker pass 1 is
used to create a hash table that identifies kernel objects by address.
We therefore can't allow the size of any object in the pass 2 ELF to
change in a way that would change those addresses, or we would create
a garbage hash table.
Simultaneously (and regardless of CONFIG_USERSPACE's value),
gen_handles.py must transform arrays of handles from their pass 1
values to their pass 2 values; see the file's docstring for more
details on that transformation.
The way this works is that gen_handles.py just pads out each pass 2
array so its length is the same as its pass 1 value. The padding value
is a repeated run of DEVICE_HANDLE_ENDS values. This value is the
terminator which we look for at runtime in places like
device_required_handles_get(), so there must be at least one, and we
error out in gen_handles.py if there's no room in the pass 2 array for
at least one such value. (If there is extra room, we just keep
inserting extra DEVICE_HANDLE_ENDS values to pad the array to its
original length.)
However, it is possible that a device has more direct dependencies in
the pass 2 handles array than its corresponding devicetree node had in
the pass 1 array. When this happens, users have no recourse, so that's
a potential showstopper.
To work around this possibility for now, add a new config option,
CONFIG_DEVICE_HANDLE_PADDING, whose value defaults to 0.
When nonzero, it is a count of padding handles that are inserted into
each device handles array. When gen_handles.py errors out due to lack
of room, its error message now tells the user how much to increase
CONFIG_DEVICE_HANDLE_PADDING by to work around the problem.
It looks like a real fix for this is to allocate kernel objects whose
addresses are required for hash tables in CONFIG_USERSPACE=y
configurations *before* the handle arrays. The handle arrays could
then be resized as needed in pass 2, which saves ROM by avoiding
unnecessary padding, and would avoid the need for
CONFIG_DEVICE_HANDLE_PADDING altogether.
However, this 'real fix' is not available and we are facing a deadline
to get a temporary solution in for Zephyr v2.7.0, so this is a good
enough workaround for now.
Signed-off-by: Martí Bolívar <marti.bolivar@nordicsemi.no>
This reverts commit b01e41ccdd.
It's not clear that the supported devices are being properly computed,
so let's revert this for v2.7.0 until we've had more time to think
it through.
Signed-off-by: Martí Bolívar <marti.bolivar@nordicsemi.no>
Before running a timer's timeout function, we need to make
sure that the threads waiting on this timer have been
added to the timer's wait queue. Use the timer lock to mask
interrupts in the z_timer_expiration_handler function so that
access to the timer's wait queue is synchronized.
Signed-off-by: Chen Peng1 <peng1.chen@intel.com>
Some SMP applications have threading designs where every thread
created is always assigned to a specific CPU, and never want to
schedule them symmetrically across CPUs under any circumstance.
In this situation, it's possible to optimize the run queue design a
bit to put a separate queue in each CPU struct instead of having a
single global one. This is probably good for a few cycles per
scheduling event (maybe a bit more on architectures where cache
locality can be exploited) in circumstances where there is more than
one runnable thread. It's a mild optimization, but a basically simple
one.
Signed-off-by: Andy Ross <andrew.j.ross@intel.com>
Split "init_ready_q()" into a separate function that operates on the
queue pointer and not the global kernel object. Pure refactoring.
Signed-off-by: Andy Ross <andrew.j.ross@intel.com>
Similar to the previous patch, the various _priq_run_*() functions are
always passed a first argument that is the singleton system run queue
(this is because the same backend functions are used by wait queues).
Refactor into a simpler API that places the access to the run queue in
just a single spot.
Signed-off-by: Andy Ross <andrew.j.ross@intel.com>
Pure refactoring. For historical reasons these two functions took a
first argument (a pointer to the run queue) that was always the same.
Eliminate it.
Signed-off-by: Andy Ross <andrew.j.ross@intel.com>
Adding missing parentheses. Without them, wrong results
appeared when k_cycle_get_32 wrapped.
Signed-off-by: Krzysztof Chruscinski <krzysztof.chruscinski@nordicsemi.no>
Add a bitarray to struct osThreadDef_t to indicate whether a
thread is in use. We can then get the first available thread
by searching this array when creating a new thread, and mark a
thread as free again in this array when terminating a thread.
Signed-off-by: Chen Peng1 <peng1.chen@intel.com>
Cadence XCC is based off of a very old 4.2 gcc compiler, which didn't
perfectly support C99 "inline" semantics with respect to
cross-translation-unit inline linkage (which Zephyr does not use, our
inlines are static only) and declaration order.
Fix the one spot where we were calling an inline before its
ALWAYS_INLINE definition, and add a flag to suppress the warning so
CI's trying to build with XCC and -Werror don't flip out.
Signed-off-by: Andy Ross <andrew.j.ross@intel.com>
Some architectures already return -ENOTSUP when these functions
are called. So add this return value to the API doc.
Signed-off-by: Daniel Leung <daniel.leung@intel.com>
Add a SOC API to allow for application control over deep idle power
states. Note that the hardware idle entry happens out of the WAITI
instruction, so the application has to be responsible for ensuring
that the CPU to be halted actually reaches idle deterministically. Lots of
warnings in the docs to this effect.
Signed-off-by: Andy Ross <andrew.j.ross@intel.com>
This commit removes the `timeout_q` from the `struct z_kernel` since it
is no longer used.
Note that the new kernel timeout implementation introduced in the
commit 987c0e5fc1 uses `timeout_list`
global variable in place of it.
Signed-off-by: Stephanos Ioannidis <root@stephanos.io>
To support arm-ds / armlink it is required that the weak main is located
in an object externally to the object using the weak symbol.
If the weak symbol is inside the object referring to it, then the weak
symbol will be used and this will result in
```
Error: L6200E: Symbol __ARM_use_no_argv multiply defined
(by init.o and main.o).
```
as both the weak and strong symbols are used.
Signed-off-by: Torsten Rasmussen <Torsten.Rasmussen@nordicsemi.no>
Cleanup and preparation commit for linker script generator.
Zephyr linker scripts provides start and end symbols for each section,
and sometimes even size and LMA start symbols.
Generally, start and end symbols use the following pattern:
Section name: foo
Section start symbol: __foo_start
Section end symbol: __foo_end
However, this pattern is not followed consistently.
To allow for linker script generation and to ensure consistent naming
of symbols, the following pattern is introduced consistently, allowing
for cleaner linker script generation:
Section name: foo
Section start symbol: __foo_start
Section end symbol: __foo_end
Section size symbol: __foo_size
Section LMA start symbol: __foo_load_start
This commit aligns the symbols for __itcm_load_start and
__dtcm_data_load_start with the other symbols, in such a way that they
follow a consistent pattern which allows for linker script and scatter
file generation.
The symbols are named according to the section name they describe.
Section names are itcm and dtcm.
The following symbols are aligned in this commit:
- __itcm_rom_start -> __itcm_load_start
- __dtcm_data_rom_start -> __dtcm_data_load_start
Signed-off-by: Torsten Rasmussen <Torsten.Rasmussen@nordicsemi.no>
Cleanup and preparation commit for linker script generator.
Zephyr linker scripts provides start and end symbols for each section,
and sometimes even size and LMA start symbols.
Generally, start and end symbols use the following pattern:
Section name: foo
Section start symbol: __foo_start
Section end symbol: __foo_end
However, this pattern is not followed consistently.
To allow for linker script generation and to ensure consistent naming
of symbols, the following pattern is introduced consistently, allowing
for cleaner linker script generation:
Section name: foo
Section start symbol: __foo_start
Section end symbol: __foo_end
Section size symbol: __foo_size
Section LMA start symbol: __foo_load_start
This commit aligns the symbols for _ramfunc_ram/rom with the other
symbols, in such a way that they follow a consistent pattern which
allows for linker script and scatter file generation.
The symbols are named according to the section name they describe.
Section name is `ramfunc`
The following symbols are aligned in this commit:
- _ramfunc_ram_start -> __ramfunc_start
- _ramfunc_ram_end -> __ramfunc_end
- _ramfunc_ram_size -> __ramfunc_size
- _ramfunc_rom_start -> __ramfunc_load_start
Signed-off-by: Torsten Rasmussen <Torsten.Rasmussen@nordicsemi.no>
Cleanup and preparation commit for linker script generator.
Zephyr linker scripts provides start and end symbols for each section,
and sometimes even size and LMA start symbols.
Generally, start and end symbols use the following pattern:
Section name: foo
Section start symbol: __foo_start
Section end symbol: __foo_end
However, this pattern is not followed consistently.
To allow for linker script generation and to ensure consistent naming
of symbols, the following pattern is introduced consistently, allowing
for cleaner linker script generation:
Section name: foo
Section start symbol: __foo_start
Section end symbol: __foo_end
Section size symbol: __foo_size
Section LMA start symbol: __foo_load_start
This commit aligns the symbols for _data_ram/rom with the other symbols,
in such a way that they follow a consistent pattern which allows for
linker script and scatter file generation.
The symbols are named according to the section name they describe.
Section name is `data`
A new group named data_region is introduced which instead spans all the
input and output sections that were previously covered by
__data_ram_start, __data_ram_end, and __data_rom_start.
The following symbols are aligned in this commit:
- __data_ram_start -> __data_region_start
- __data_ram_end -> __data_region_end
- __data_rom_start -> __data_region_load_start
The following new symbols are introduced so that the data section is
aligned with other sections:
- __data_end
- __data_start
__data_start has a value identical to __data_region_start, but it
describes the start of the data section itself.
Signed-off-by: Torsten Rasmussen <Torsten.Rasmussen@nordicsemi.no>
The struct pm_device pm field found in the device state can be
statically initialized, without the need to do it at
runtime.
Signed-off-by: Gerard Marull-Paretas <gerard.marull@nordicsemi.no>
With demand paging, the heap object and its backing memory
may not be in physical memory. So initialize those heaps
in pinned region at PRE_KERNEL_1 and the remaining heaps
once paging mechanism has been initialized.
Signed-off-by: Daniel Leung <daniel.leung@intel.com>
This adds the kconfig to allow reserving a number of page frames
which do not count towards free memory. This is to ensure that
there are enough page frames available for paging code and data.
Or else, it would be possible to exhaust all page frames via
anonymous memory mappings.
Signed-off-by: Daniel Leung <daniel.leung@intel.com>
This allows memory partitions to be put into the pinned
section so they are available during boot. For example,
the stack guard (in libc partition) is needed during boot
but before the paging mechanism is initialized. Without
pinning it in physical memory, it would fault early in the
boot process.
A new cmake property app_smem,pinned_partitions is
introduced so that additional partitions can be pinned
if needed.
Signed-off-by: Daniel Leung <daniel.leung@intel.com>
During boot process, the boot sections need to be pinned in
memory to prevent them from being paged out (to avoid
pages being paged out and immediately paged in again).
Once the boot process is completed (just before calling main()),
the boot sections can be unpinned so the memory can be
used for demand paging for paging in data pages.
Signed-off-by: Daniel Leung <daniel.leung@intel.com>
If the BSS section is not present in memory at boot, it would not
have been cleared as the data pages are not in physical memory.
Manipulating those pages would result in page faults.
In this scenario, zeroing BSS can only be done once the paging
mechanism has been initialized. So do it there.
Signed-off-by: Daniel Leung <daniel.leung@intel.com>
The beginning of code in do_page_fault() is to pin the page
in memory if it is already present in physical memory.
It is there so that if a page is not present, it can proceed
to perform page-in and then pin it. So the counting of
page faults needs to be moved after the pinning code so that it
actually counts page faults, rather than counting pinning
operations when the page is already present.
Also clarify the comment on the goto statement as it is not
correct.
Signed-off-by: Daniel Leung <daniel.leung@intel.com>
The z_main_stack is needed before paging mechanism is initialized
so put the stack into the pinned section to avoid page faults.
Signed-off-by: Daniel Leung <daniel.leung@intel.com>
In do_page_fault(), the incoming page fault address is not
aligned, and it was unconditionally assigned to the page
frame virtual address field. If the backing store simply
returns the virtual address without processing in
k_mem_paging_backing_store_location_get(), this unaligned
address will be passed to arch_mem_page_out(). On x86,
it is further passed to range_map() which asserts if
the physical address is not page aligned. So align
the address to page size before assigning it to the page
frame virtual address field.
Signed-off-by: Daniel Leung <daniel.leung@intel.com>
k_work_queue_start receives a struct that is expected to be
uninitialized (zeroed); otherwise the behavior is undefined.
Following the Zephyr semantics, this PR introduces a new init function
for this struct.
Fixes #36865
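With the new initializer the sequence becomes (stack size and priority are
illustrative):
```c
#include <kernel.h>

#define MY_WQ_STACK_SIZE 1024
#define MY_WQ_PRIO       5

K_THREAD_STACK_DEFINE(my_wq_stack, MY_WQ_STACK_SIZE);
static struct k_work_q my_wq;

void start_my_queue(void)
{
	/* Put the struct into a known state first (it may not be zeroed)... */
	k_work_queue_init(&my_wq);

	/* ...then start the queue as before. */
	k_work_queue_start(&my_wq, my_wq_stack,
			   K_THREAD_STACK_SIZEOF(my_wq_stack),
			   MY_WQ_PRIO, NULL);
}
```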
Signed-off-by: Flavio Ceolin <flavio.ceolin@intel.com>
Adds an API to query and visit supported devices. Follows the example
set by the required devices API.
Implements #37793.
Signed-off-by: Jordan Yates <jordan.yates@data61.csiro.au>
This adds very primitive logic to allow linking a prebuilt
static library of kernel code instead of building the kernel
from source. Note that the library is built with a specific
set of kconfigs, and they must match when building applications,
or else there would be mysterious crashes.
Signed-off-by: Daniel Leung <daniel.leung@intel.com>
While reading the code, I found some typos in the code comments,
at lines 226 and 668.
Fix the comments to make them more solid.
Signed-off-by: Naiyuan Tian <naiyuan.tian@intel.com>