Add a closing comment to each endif naming the configuration
option to which the endif belongs, to make the code clearer
when the configs need adapting.
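A minimal sketch of the pattern (CONFIG_EXAMPLE is a placeholder
option name):
```c
#ifdef CONFIG_EXAMPLE
void example_init(void); /* only visible when CONFIG_EXAMPLE is set */
#endif /* CONFIG_EXAMPLE */
```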
Signed-off-by: Simon Hein <Shein@baumer.com>
This header does not expose any public APIs, so move it under
kernel/include and update the files that include it.
Signed-off-by: Anas Nashif <anas.nashif@intel.com>
The init infrastructure, found in `init.h`, is currently used by:
- `SYS_INIT`: to call functions before `main`
- `DEVICE_*`: to initialize devices
They are all sorted according to an initialization level + a priority.
`SYS_INIT` calls are really orthogonal to devices; however, the
function signature requires a `const struct device *dev` as the first
argument. The only reason for that is that the same init machinery is
used by devices, so we have something like:
```c
struct init_entry {
        int (*init)(const struct device *dev);
        /* only set by DEVICE_*, otherwise NULL */
        const struct device *dev;
};
```
As a result, we end up with a weird/ugly pattern like this:
```c
static int my_init(const struct device *dev)
{
        /* always NULL! add ARG_UNUSED to avoid compiler warning */
        ARG_UNUSED(dev);
        ...
}
```
This is really a result of poor internals isolation. This patch
proposes to make init entries more flexible so that they can accept
system initialization calls like this:
```c
static int my_init(void)
{
        ...
}
```
This is achieved using a union:
```c
union init_function {
        /* for SYS_INIT, used when init_entry.dev == NULL */
        int (*sys)(void);
        /* for DEVICE*, used when init_entry.dev != NULL */
        int (*dev)(const struct device *dev);
};

struct init_entry {
        /* stores the init function (either for SYS_INIT or DEVICE*) */
        union init_function init_fn;
        /* stores the device pointer for DEVICE*, NULL for SYS_INIT.
         * Allows knowing which union member to call.
         */
        const struct device *dev;
};
```
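For illustration, a minimal sketch of how an entry could then be
dispatched (hypothetical helper name, not the actual kernel code):
```c
static int run_init_entry(const struct init_entry *entry)
{
        /* the dev pointer doubles as the union discriminator */
        if (entry->dev != NULL) {
                return entry->init_fn.dev(entry->dev);
        }

        return entry->init_fn.sys();
}
```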
This solution **does not increase ROM usage** and allows offering
clean public APIs for both SYS_INIT and DEVICE*. Note, however, that
the init machinery keeps some coupling with devices.
**NOTE**: This is a breaking change! All `SYS_INIT` functions will need
to be converted to the new signature. See the script offered in the
following commit.
Signed-off-by: Gerard Marull-Paretas <gerard.marull@nordicsemi.no>
init: convert SYS_INIT functions to the new signature
Conversion scripted using scripts/utils/migrate_sys_init.py.
Signed-off-by: Gerard Marull-Paretas <gerard.marull@nordicsemi.no>
manifest: update projects for SYS_INIT changes
Update modules with updated SYS_INIT calls:
- hal_ti
- lvgl
- sof
- TraceRecorderSource
Signed-off-by: Gerard Marull-Paretas <gerard.marull@nordicsemi.no>
tests: devicetree: devices: adjust test
Adjust the test according to the recent SYS_INIT infrastructure
changes.
Signed-off-by: Gerard Marull-Paretas <gerard.marull@nordicsemi.no>
tests: kernel: threads: adjust SYS_INIT call
Adjust to the new signature: int (*init_fn)(void);
Signed-off-by: Gerard Marull-Paretas <gerard.marull@nordicsemi.no>
In order to bring consistency in-tree, migrate all kernel code to the
new prefix <zephyr/...>. Note that the conversion has been scripted,
refer to zephyrproject-rtos#45388 for more details.
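For example, an include line changes as follows (illustrative):
```c
#include <zephyr/init.h> /* previously: #include <init.h> */
```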
Signed-off-by: Gerard Marull-Paretas <gerard.marull@nordicsemi.no>
For functions returning nothing, there is no need to document them
with @return, as Doxygen complains about "documented empty
return type of ...".
Signed-off-by: Daniel Leung <daniel.leung@intel.com>
Remove this intrusive tracing feature in favor of the new object tracing
using the main tracing feature in Zephyr. See #33603 for the new tracing
coverage for all objects.
This will allow for support in more tools and less reliance on GDB for
tracing objects.
Signed-off-by: Anas Nashif <anas.nashif@intel.com>
Add a 'U' suffix to values used in computations and comparisons
against unsigned variables, plus other related fixes for the same
MISRA rule (10.4).
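A minimal illustration of the rule (hypothetical code, not taken from
the patch):
```c
#include <stdint.h>

static uint32_t clamp_count(uint32_t count)
{
        /* the 'U' suffix gives the literals the same essential type
         * category as the unsigned operand (MISRA-C rule 10.4) */
        if (count > 100U) {
                count = 100U;
        }

        return count;
}
```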
Signed-off-by: Anas Nashif <anas.nashif@intel.com>
The mailbox and msgq utilities had API variants that could pass old
mem_pool blocks through the data structure. That API is being
deprecated (and the features were obscure), so remove the internal
support.
Signed-off-by: Andy Ross <andrew.j.ross@intel.com>
Mark all k_mem_pool APIs deprecated for future code. Remaining
internal usage now uses equivalent "z_mem_pool" symbols instead.
Fixes #24358
Signed-off-by: Andy Ross <andrew.j.ross@intel.com>
Now that the device_api attribute, like all the other attributes, is
no longer modified at runtime, it is possible to switch all device
driver instances to be constant.
A coccinelle rule is used for this:
@r_const_dev_1
disable optional_qualifier
@
@@
-struct device *
+const struct device *
@r_const_dev_2
disable optional_qualifier
@
@@
-struct device * const
+const struct device *
Fixes #27399
Signed-off-by: Tomasz Bursztyka <tomasz.bursztyka@linux.intel.com>
Add a k_timeout_t type, and use it everywhere that kernel API
functions were accepting a millisecond timeout argument. Instead of
forcing milliseconds everywhere (which are often not integrally
representable as system ticks), do the conversion to ticks at the
point where the timeout is created. This avoids an extra unit
conversion in some application code, and allows us to express the
timeout in units other than milliseconds to achieve greater precision.
The existing K_MSEC() et al. macros now return initializers for a
k_timeout_t.
The K_NO_WAIT and K_FOREVER constants have now become k_timeout_t
values, which means they cannot be operated on as integers.
Applications which have their own APIs that need to inspect these
vs. user-provided timeouts can now use a K_TIMEOUT_EQ() predicate to
test for equality.
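A minimal sketch of that predicate in use (hypothetical function;
assumes the kernel header is included):
```c
static void wait_for_event(k_timeout_t timeout)
{
        /* K_NO_WAIT is a k_timeout_t now, so == no longer works */
        if (K_TIMEOUT_EQ(timeout, K_NO_WAIT)) {
                return; /* caller asked for a non-blocking call */
        }

        k_sleep(timeout); /* k_sleep() also takes a k_timeout_t */
}
```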
Timer drivers, which receive an integer tick count in their
z_clock_set_timeout() functions, now use the integer-valued
K_TICKS_FOREVER constant instead of K_FOREVER.
For the initial release, to preserve source compatibility, a
CONFIG_LEGACY_TIMEOUT_API kconfig is provided. When true, the
k_timeout_t will remain a compatible 32 bit value that will work with
any legacy Zephyr application.
Some subsystems present timeout (or timeout-like) values to their own
users as APIs that would re-use the kernel's own constants and
conventions. These will require some minor design work to adapt to
the new scheme (in most cases just using k_timeout_t directly in their
own API), and they have not been changed in this patch, instead
selecting CONFIG_LEGACY_TIMEOUT_API via kconfig. These subsystems
include: CAN Bus, the Microbit display driver, I2S, LoRa modem
drivers, the UART Async API, Video hardware drivers, the console
subsystem, and the network buffer abstraction.
k_sleep() now takes a k_timeout_t argument, with a k_msleep() variant
provided that works identically to the original API.
Most of the changes here are just type/configuration management and
documentation, but there are logic changes in mempool, where a loop
that used a timeout numerically has been reworked using a new
z_timeout_end_calc() predicate. Also in queue.c, when POLL was
enabled, a similar loop was needlessly used to try to retry the
k_poll() call after a spurious failure. But k_poll() does not fail
spuriously, so the loop was removed.
Signed-off-by: Andy Ross <andrew.j.ross@intel.com>
Remove leading/trailing blank lines in .c, .h, .py, .rst, .yml, and
.yaml files.
Will avoid failures with the new CI test in
https://github.com/zephyrproject-rtos/ci-tools/pull/112, though it only
checks changed files.
Move the 'target-notes' target in boards/xtensa/odroid_go/doc/index.rst
to get rid of the trailing blank line there. It was probably misplaced.
Signed-off-by: Ulf Magnusson <Ulf.Magnusson@nordicsemi.no>
Promote the private z_arch_* namespace, which specifies
the interface between the core kernel and the
architecture code, to a new top-level namespace named
arch_*.
This allows our documentation generation to create
online documentation for this set of interfaces,
and this set of interfaces is worth treating in a
more formal way anyway.
Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
This commit refactors kernel and arch headers to establish a boundary
between private and public interface headers.
The refactoring strategy used in this commit is detailed in the issue
This commit introduces the following major changes:
1. Establish a clear boundary between private and public headers by
removing "kernel/include" and "arch/*/include" from the global
include paths. Ideally, only kernel/ and arch/*/ source files should
reference the headers in these directories. If these headers must be
used by a component, these include paths shall be manually added to
the CMakeLists.txt file of the component. This is intended to
discourage applications from including private kernel and arch
headers either knowingly or unknowingly.
- kernel/include/ (PRIVATE)
This directory contains the private headers that provide private
kernel definitions which should not be visible outside the kernel
and arch source code. All public kernel definitions must be added
to an appropriate header located under include/.
- arch/*/include/ (PRIVATE)
This directory contains the private headers that provide private
architecture-specific definitions which should not be visible
outside the arch and kernel source code. All public architecture-
specific definitions must be added to an appropriate header located
under include/arch/*/.
- include/ AND include/sys/ (PUBLIC)
This directory contains the public headers that provide public
kernel definitions which can be referenced by both kernel and
application code.
- include/arch/*/ (PUBLIC)
This directory contains the public headers that provide public
architecture-specific definitions which can be referenced by both
kernel and application code.
2. Split arch_interface.h into "kernel-to-arch interface" and "public
arch interface" divisions.
- kernel/include/kernel_arch_interface.h
* provides private "kernel-to-arch interface" definition.
* includes arch/*/include/kernel_arch_func.h to ensure that the
interface function implementations are always available.
* includes sys/arch_interface.h so that public arch interface
definitions are automatically included when including this file.
- arch/*/include/kernel_arch_func.h
* provides architecture-specific "kernel-to-arch interface"
implementation.
* only the functions that will be used in kernel and arch source
files are defined here.
- include/sys/arch_interface.h
* provides "public arch interface" definition.
* includes include/arch/arch_inlines.h to ensure that the
architecture-specific public inline interface function
implementations are always available.
- include/arch/arch_inlines.h
* includes architecture-specific arch_inlines.h in
include/arch/*/arch_inlines.h.
- include/arch/*/arch_inlines.h
* provides architecture-specific "public arch interface" inline
function implementation.
* supersedes include/sys/arch_inline.h.
3. Refactor kernel and the existing architecture implementations.
- Remove circular dependency of kernel and arch headers. The
following general rules should be observed:
* Never include any private headers from public headers
* Never include kernel_internal.h in kernel_arch_data.h
* Always include kernel_arch_data.h from kernel_arch_func.h
* Never include kernel.h from kernel_structs.h either directly or
indirectly. Only add the kernel structures that must be referenced
from public arch headers in this file.
- Relocate syscall_handler.h to include/ so it can be used in
public code. This is necessary because much user-mode public code
references the functions defined in this header.
- Relocate kernel_arch_thread.h to include/arch/*/thread.h. This is
necessary to provide architecture-specific thread definition for
'struct k_thread' in kernel.h.
- Remove any private header dependencies from public headers using
the following methods:
* If dependency is not required, simply omit
* If dependency is required,
- Relocate a portion of the required dependencies from the
private header to an appropriate public header OR
- Relocate the required private header to make it public.
This commit supersedes #20047, addresses #19666, and fixes #3056.
Signed-off-by: Stephanos Ioannidis <root@stephanos.io>
z_set_thread_return_value is part of the core kernel -> arch
interface and has been renamed to z_arch_thread_return_value_set.
z_set_thread_return_value_with_data renamed to
z_thread_return_value_set_with_data for consistency.
Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
move misc/dlist.h to sys/dlist.h and
create a shim for backward-compatibility.
No functional changes to the headers.
A warning in the shim can be controlled with CONFIG_COMPAT_INCLUDES.
Related to #16539
Signed-off-by: Anas Nashif <anas.nashif@intel.com>
The k_stack data type cannot be u32_t on a 64-bit system as it is
often used to store pointers. Let's define a dedicated type for stack
data values, namely stack_data_t, which can be adjusted accordingly.
For now it is defined to uintptr_t which is the integer type large
enough to hold a pointer, meaning it is equivalent to u32_t on 32-bit
systems and u64_t on 64-bit systems.
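A minimal sketch of the definition described above (simplified; the
actual kernel header may differ):
```c
#include <stdint.h>

/* integer type large enough to hold a pointer: equivalent to u32_t
 * on 32-bit systems and u64_t on 64-bit systems */
typedef uintptr_t stack_data_t;
```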
Signed-off-by: Nicolas Pitre <npitre@baylibre.com>
Given that the section name and boundary symbols can be inferred from
the struct object name, it makes sense to create an iterator that
abstracts away the access details and reduces the possibility of
mistakes.
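A hedged sketch of such an iterator (hypothetical macro and
boundary-symbol names; the actual kernel API may differ):
```c
/* iterate over every object of the given struct type placed in its
 * dedicated linker section, using inferred start/end symbols */
#define SECTION_FOREACH(type, iter)                                   \
        extern struct type _##type##_list_start[];                    \
        extern struct type _##type##_list_end[];                      \
        for (struct type *iter = _##type##_list_start;                \
             iter < _##type##_list_end; iter++)
```
Usage would then look like `SECTION_FOREACH(k_timer, timer) { ... }`,
with the section name and boundary symbols derived from the type name.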
Signed-off-by: Nicolas Pitre <npitre@baylibre.com>
Update reserved function names starting with one underscore, replacing
them as follows:
'_k_' with 'z_'
'_K_' with 'Z_'
'_handler_' with 'z_handl_'
'_Cstart' with 'z_cstart'
'_Swap' with 'z_swap'
This renaming is done on both the global and static function names in
kernel/include and include/. Other static function names in kernel/
are renamed by removing the leading underscore. Other function names
not starting with any prefix listed above are renamed starting with
a 'z_' or 'Z_' prefix.
Function names starting with two or three leading underscores are not
automatically renamed since these names would collide with the variants
with two or three leading underscores.
Various generator scripts have also been updated as well as perf,
linker and usb files. These are
drivers/serial/uart_handlers.c
include/linker/kobject-text.ld
kernel/include/syscall_handler.h
scripts/gen_kobject_list.py
scripts/gen_syscall_header.py
Signed-off-by: Patrik Flykt <patrik.flykt@intel.com>
Just like with _Swap(), we need two variants of these utilities which
can atomically release a lock and context switch. The naming shifts
(for byte count reasons) to _reschedule/_pend_curr, and both have an
_irqlock variant which takes the traditional locking.
Just refactoring. No logic changes.
Signed-off-by: Andy Ross <andrew.j.ross@intel.com>
According to MISRA-C, an object should be defined at block scope if
it is used in only a single function.
MISRA-C rule 8.9
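A minimal illustration (hypothetical code, not taken from the patch):
```c
/* the counter is used by only this function, so per MISRA-C rule 8.9
 * it is defined at block scope rather than at file scope */
static int next_id(void)
{
        static int id_counter;

        return ++id_counter;
}
```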
Signed-off-by: Flavio Ceolin <flavio.ceolin@intel.com>
A final else statement must be provided when an if statement is
followed by one or more else if statements.
MISRA-C rule 15.7
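A minimal illustration (hypothetical code, not taken from the patch):
```c
static int sign_of(int a)
{
        int s;

        if (a > 0) {
                s = 1;
        } else if (a < 0) {
                s = -1;
        } else {
                /* final else required by MISRA-C rule 15.7 */
                s = 0;
        }

        return s;
}
```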
Signed-off-by: Flavio Ceolin <flavio.ceolin@intel.com>
Check the return values of some scattered functions across the
kernel. MISRA-C requires that all non-void functions have their
return value checked, though in some cases there is nothing to do;
just acknowledge it.
Signed-off-by: Flavio Ceolin <flavio.ceolin@intel.com>
memcpy always returns a pointer to dest, so its return value can be
ignored. Make this explicit so compilers will never raise
warnings/errors about it.
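A minimal illustration of the pattern (hypothetical code, not taken
from the patch):
```c
#include <string.h>

static void copy_buf(char *dest, const char *src, size_t n)
{
        /* memcpy always returns dest; the void cast documents that
         * ignoring the return value is intentional */
        (void)memcpy(dest, src, n);
}
```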
Signed-off-by: Flavio Ceolin <flavio.ceolin@intel.com>
There were multiple spots where code was using the _wait_q_t
abstraction as a synonym for a dlist and doing direct list management
on them with the dlist APIs. Refactor _wait_q_t into a proper opaque
struct (not a typedef for sys_dlist_t) and write a simple wrapper API
for the existing usages. Now replacement of wait_q with a different
data structure is much cleaner.
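A hedged sketch of the opaque wrapper (simplified; the real kernel
code differs in detail):
```c
#include <misc/dlist.h> /* dlist API (later moved to sys/dlist.h) */

/* opaque wait queue: users can no longer treat it as a bare dlist */
typedef struct {
        sys_dlist_t waitq;
} _wait_q_t;

#define _WAIT_Q_INIT(wq) { SYS_DLIST_STATIC_INIT(&(wq)->waitq) }
```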
Note that there were some SYS_DLIST_FOR_EACH_SAFE loops in mailbox.c
that got replaced by the normal/non-safe macro. While these loops do
mutate the list in the code body, they always do an early return in
those circumstances instead of returning into the macro'd for() loop,
so the _SAFE usage was needless.
Signed-off-by: Andy Ross <andrew.j.ross@intel.com>
Almost everywhere this was called, it was immediately followed by
_abort_thread_timeout(), for obvious reasons. The only exceptions
were in timeout and k_timer expiration (unifying these two would be
another good cleanup), which are peripheral parts of the scheduler and
can plausibly use a more "internal" API.
So make the common case the default, and expose the old behavior as
_unpend_thread_no_timeout(). (Along with identical changes for
_unpend_first_thread) Saves code bytes and simplifies scheduler
surface area for future synchronization work.
Signed-off-by: Andy Ross <andrew.j.ross@intel.com>
Now that other work has eliminated the two cases where we had to do a
reschedule "but yield even if we are cooperative", we can squash both
down to a single _reschedule() function which does almost exactly what
legacy _Swap() did, but wrapped as a proper scheduler API.
Signed-off-by: Andy Ross <andrew.j.ross@intel.com>
Recent changes have eliminated most use of _Swap() in favor of higher
level scheduler abstractions. We can remove the header too.
Signed-off-by: Andy Ross <andrew.j.ross@intel.com>
Everywhere the current thread is pended, the code is going to have to
do a _Swap() soon afterward, yet the scheduler API exposed these as
separate steps. Unify this pattern everywhere it appears, which saves
some code bytes and gets _Swap() out of the general scheduler API at
zero cost.
Signed-off-by: Andy Ross <andrew.j.ross@intel.com>
There was a somewhat promiscuous pattern in the kernel where IPC
mechanisms would do something that might affect the current thread
choice, then check _must_switch_threads() (or occasionally
__must_switch_threads -- don't ask, the distinction is being replaced
by real English words), sometimes _is_in_isr() (but not always, even
in contexts where that looks like it would be a mistake), and then
call _Swap() if everything is OK, otherwise releasing the irq_lock().
Sometimes this was done directly, sometimes via the inverted test,
sometimes (poll, heh) by doing the test when the thread state was
modified and then needlessly passing the result up the call stack to
the point of the _Swap().
And some places were just calling _reschedule_threads(), which did all
this already.
Unify all this madness. The old _reschedule_threads() function has
been split into two variants: _reschedule_yield() and
_reschedule_noyield(). The latter is the "normal" one that respects
the cooperative priority of the current thread (i.e. it won't switch
out even if there is a higher priority thread ready -- the current
thread has to pend itself first), the former is used in the handful of
places where code was doing a swap unconditionally, just to preserve
precise behavior across the refactor. I'm not at all convinced it
should exist...
Signed-off-by: Andy Ross <andrew.j.ross@intel.com>
The mailbox code was written to use the _remove_thread_from_ready_q()
API directly, which would be good to get out of the scheduler internal
API. What it really wanted to do is to mark a thread "PENDING"
without actually adding it to a wait queue, which is sane enough (the
message stores the "thread to wake up on receipt" handle).
So allow that naturally in the _pend_thread() API by passing a NULL
wait_q. Really a wait_q needn't be the only way a thread can block.
Signed-off-by: Andy Ross <andrew.j.ross@intel.com>
Names that begin with an underscore are reserved by the C standard.
This patch does not change names of functions defined and implemented
in header files.
Signed-off-by: Leandro Pereira <leandro.pereira@intel.com>
The xtensa-asm2 work included a patch that added nano_internal.h
includes in lots of places that needed to have _Swap defined, because
it had to break a cycle and this no longer got pulled in from the arch
headers.
Unfortunately those new includes created new and more amusing cycles
elsewhere which led to breakage on other platforms.
Break out the _Swap definition (only) into a separate header and use
that instead. Cleaner. Seems not to have any more hidden gotchas.
Signed-off-by: Andy Ross <andrew.j.ross@intel.com>
_Swap() is defined in nano_internal.h. Everything calls _Swap().
Pretty much nothing that called _Swap() included nano_internal.h,
expecting it to be picked up automatically through other headers (as
it happened, from the kernel arch-specific include file). A new
_Swap() is going to need some other symbols in the inline definition,
so I needed to break that cycle. Now nothing sees _Swap() defined
anymore. Put nano_internal.h everywhere it's needed.
Our kernel includes remain a big awful yucky mess. This makes things
more correct but no less ugly. Needs cleanup.
Signed-off-by: Andy Ross <andrew.j.ross@intel.com>
SYS_DLIST_FOR_EACH_CONTAINER is preferable to using
SYS_DLIST_FOR_EACH_NODE, as it avoids the direct casting that assumes
the node field is always at the beginning.
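For illustration (hypothetical item struct; the macro is the one named
above):
```c
#include <misc/dlist.h> /* dlist API (later moved to sys/dlist.h) */

struct item {
        int value;
        sys_dnode_t node; /* need not be the first member */
};

static int sum_items(sys_dlist_t *list)
{
        struct item *it;
        int sum = 0;

        SYS_DLIST_FOR_EACH_CONTAINER(list, it, node) {
                sum += it->value; /* no cast from the node pointer */
        }

        return sum;
}
```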
Signed-off-by: Luiz Augusto von Dentz <luiz.von.dentz@intel.com>
Fixes sparse warnings:
<snip>/zephyr/kernel/timer.c:15:16: warning: symbol '_trace_list_k_timer' was not declared. Should it be static?
<snip>/zephyr/kernel/sem.c:32:14: warning: symbol '_trace_list_k_sem' was not declared. Should it be static?
<snip>/zephyr/kernel/stack.c:24:16: warning: symbol '_trace_list_k_stack' was not declared. Should it be static?
<snip>/zephyr/kernel/queue.c:27:16: warning: symbol '_trace_list_k_queue' was not declared. Should it be static?
<snip>/zephyr/kernel/pipes.c:40:15: warning: symbol '_trace_list_k_pipe' was not declared. Should it be static?
<snip>/zephyr/kernel/mutex.c:46:16: warning: symbol '_trace_list_k_mutex' was not declared. Should it be static?
<snip>/zephyr/kernel/msg_q.c:26:15: warning: symbol '_trace_list_k_msgq' was not declared. Should it be static?
<snip>/zephyr/kernel/mem_slab.c:20:19: warning: symbol '_trace_list_k_mem_slab' was not declared. Should it be static?
<snip>/zephyr/kernel/mailbox.c:53:15: warning: symbol '_trace_list_k_mbox' was not declared. Should it be static?
Change-Id: I42d55aea9855b9c1dd560852ca033c9a19f1ac21
Signed-off-by: Maciek Borzecki <maciek.borzecki@gmail.com>
This patch amounts to a mostly complete rewrite of the k_mem_pool
allocator, which had been the source of historical complaints vs. the
one easily available in newlib. The basic design of the allocator is
unchanged (it's still a 4-way buddy allocator), but the implementation
has made different choices throughout. Major changes:
Space efficiency: The old implementation required ~2.66 bytes per
"smallest block" in overhead, plus 16 bytes per log4 "level" of the
allocation tree, plus a global tracking struct of 32 bytes and a very
surprising 12 byte overhead (in struct k_mem_block) per active
allocation on top of the returned data pointer. This new allocator
uses a simple bit array as the only per-block storage and places the
free list into the freed blocks themselves, requiring only ~1.33 bits
per smallest block, 12 bytes per level, 32 bytes globally and only 4
bytes of per-allocation bookkeeping. And it puts more of the generated
tree into BSS, slightly reducing binary sizes for non-trivial pool
sizes (even as the code size itself has increased a tiny bit).
IRQ safe: atomic operations on the store have been cut down to be at
most "4 bit sets and dlist operations" (i.e. a few dozen
instructions), reducing latency significantly and allowing us to lock
against interrupts cleanly from all APIs. Allocations and frees can
be done from ISRs now without limitation (well, obviously you can't
sleep, so "timeout" must be K_NO_WAIT).
Deterministic performance: there is no more "defragmentation" step
that must be manually managed. Block coalescing is done synchronously
at free time and takes constant time (strictly log4(num_levels)), as
the detection of four free "partner bits" is just a simple shift and
mask operation.
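As a hedged sketch of that shift-and-mask check (illustrative only;
not the kernel's actual code):
```c
#include <stdbool.h>
#include <stdint.h>

/* blocks at a given level come in aligned groups of four "partners";
 * a group of four bits never straddles a 32-bit word, so one shift
 * and mask decides whether the group can be coalesced */
static bool partners_free(const uint32_t *bits, int block)
{
        int group = block & ~0x3;
        uint32_t mask = 0xFu << (group % 32);

        return (bits[group / 32] & mask) == mask;
}
```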
Cleaner behavior with odd sizes. The old code assumed that the
specified maximum size would be a power of four multiple of the
minimum size, making use of non-standard buffer sizes problematic.
This implementation re-aligns the sub-blocks at each level and can
handle situations where alignment restrictions mean fewer than 4x will
be available. If you want precise layout control, you can still
specify the sizes rigorously. It just doesn't break if you don't.
More portable: the original implementation made use of GNU assembler
macros embedded inline within C __asm__ statements. Not all
toolchains are actually backed by a GNU assembler, even when they
support the GNU assembly syntax. This is pure C, albeit with some
hairy macros to expand the compile-time-computed values.
Related changes that had to be rolled into this patch for bisectability:
* The new allocator has a firm minimum block size of 8 bytes (to store
the dlist_node_t). It will "work" with smaller requested min_size
values, but obviously makes no firm promises about layout or how
many will be available. Unfortunately many of the tests were
written with very small 4-byte minimum sizes and to assume exactly
how many they could allocate. Bump the sizes to match the allocator
minimum.
* The mbox and pipes API made use of the internals of k_mem_block and
had to be ported to the new scheme. Blocks no longer store a
backpointer to the pool that allocated them (it's an integer ID in a
bitfield), so if you want to "nullify" them you have to use the
data pointer.
* test_mbox_api had a bug where it was prematurely freeing k_mem_blocks
that it sent through the mailbox. This worked in the old allocator
because the memory wouldn't be touched when freed, but now we stuff
list pointers in there and the bug was exposed.
* Remove test_mpool_options: the options (related to defragmentation
behavior) tested no longer exist.
Signed-off-by: Andy Ross <andrew.j.ross@intel.com>