Commit graph

2629 commits

Andy Ross 52351458f4 kernel/sched: Add timing.h support to thread_usage
The runtime stats feature has always supported this, so use the same
kconfig to indirect the timing source in the same way.

(Personally I'm not a fan of the "timing" API, which really doesn't do
anything that the existing core "cycles" API does not except add a
bunch of code due to the separate implementation of frequency
management and conversion routines.  It comes from an era where
"cycles" were fixed to a MHz frequency clock on platforms like x86 yet
we had benchmarks that wanted to use the TSC.  Those days are behind
us and "cycles" can be fast everywhere.)

Signed-off-by: Andy Ross <andrew.j.ross@intel.com>
2021-11-08 21:32:20 -05:00
Andy Ross b62d6e17a4 kernel/sched: Add an optional "all" counter for thread_usage
Tally the runtime of all non-idle threads.  Make it optional via
kconfig to avoid overhead.

Signed-off-by: Andy Ross <andrew.j.ross@intel.com>
2021-11-08 21:32:20 -05:00
Andy Ross 4ae3250301 sched: Hook SCHED_USAGE from existing tracing hook
On older architectures, we don't have the
architecture-independent/scheduler-internal hooks (which require
USE_SWITCH) but there is a hook shared by the tracing layer we can use.

This is sort of a layering violation (stat tracking is a core feature,
tracing is supposed to be optional), but simple and lightweight.  And
eventually it will go away as these architectures migrate.

Signed-off-by: Andy Ross <andrew.j.ross@intel.com>
2021-11-08 21:32:20 -05:00
Andy Ross 40d12c142d kernel/sched: Add "thread_usage" API for thread runtime cycle monitoring
This is an alternate backend that does what THREAD_RUNTIME_STATS is
doing currently, but with a few advantages:

* Correctly synchronized: you can't race against a running thread
  (potentially on another CPU!) while querying its usage.

* Realtime results: you get the right answer always, up to timer
  precision, even if a thread has been running for a while
  uninterrupted and hasn't updated its total.

* Portable, no need for per-architecture code at all for the simple
  case. (It leverages the USE_SWITCH layer to do this, so won't work
  on older architectures)

* Faster/smaller: minimizes use of 64 bit math; lower overhead in
  thread struct (keeps the scratch "started" time in the CPU struct
  instead).  One 64 bit counter per thread and a 32 bit scratch
  register in the CPU struct.

* Standalone.  It's a core (but optional) scheduler feature, no
  dependence on para-kernel configuration like the tracing
  infrastructure.

* More precise: allows architectures to optionally call a trivial
  zero-argument/no-result cdecl function out of interrupt entry to
  avoid accounting for ISR runtime in thread totals.  No configuration
  needed here, if it's called then you get proper ISR accounting, and
  if not you don't.

For right now, pending unification, it's added side-by-side with the
older API and left as a z_*() internal symbol.

Signed-off-by: Andy Ross <andrew.j.ross@intel.com>
2021-11-08 21:32:20 -05:00
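
A minimal sketch of the accounting scheme described above (the struct and function names are illustrative, not the actual kernel symbols; k_cycle_get_32() is the real cycle counter API):

```
/* Sketch only: names are illustrative, not the kernel's. */
#include <zephyr.h>

struct cpu_usage {
	uint32_t usage0;  /* cycle stamp when the current thread switched in */
};

struct thread_usage {
	uint64_t usage;   /* accumulated runtime, in cycles */
};

/* Called on every context switch, with the scheduler lock held. */
static void usage_switch(struct cpu_usage *cpu, struct thread_usage *outgoing)
{
	uint32_t now = k_cycle_get_32();

	/* The 32-bit subtraction is wraparound-safe; 64-bit math is
	 * confined to this single accumulation.
	 */
	outgoing->usage += (uint32_t)(now - cpu->usage0);
	cpu->usage0 = now;
}
```

Querying a running thread's total under the same lock just adds the live (now - usage0) delta, which is what makes the results realtime rather than stale.
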
Krzysztof Chruscinski 8979fbc906 kernel: timer: Call user handler without spinlock
Add spinlock unlocking before calling timer expiration
handler. Locking was introduced by dde3d6c.

Signed-off-by: Krzysztof Chruscinski <krzysztof.chruscinski@nordicsemi.no>
2021-11-08 11:05:49 -05:00
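
In outline, the expiry path after this change looks like the sketch below (simplified; expiry_fn is the real struct k_timer callback field, and the wait-queue handling is elided):

```
#include <zephyr.h>

static struct k_spinlock timer_lock;

static void expiration_handler_sketch(struct k_timer *timer)
{
	k_spinlock_key_t key = k_spin_lock(&timer_lock);

	/* ... update timer status and wake threads pended on the
	 * timer's wait queue, still under the lock (see dde3d6c) ...
	 */

	k_spin_unlock(&timer_lock, key);

	/* The user handler runs with the spinlock released, so it may
	 * itself call timer and other kernel APIs without deadlocking.
	 */
	if (timer->expiry_fn != NULL) {
		timer->expiry_fn(timer);
	}
}
```
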
Flavio Ceolin 9444480c7b pm: Better return type for pm_system_suspend
Instead of returning PM_STATE_ACTIVE for when the cpu didn't enter a
low power state and a different state when it entered, but has
already left the state and is active again, it changes
pm_system_suspend to return true when the cpu has entered a low power
state and false otherwise.

Signed-off-by: Flavio Ceolin <flavio.ceolin@intel.com>
2021-11-06 10:21:53 -04:00
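
A sketch of how an idle path consumes the new boolean (pm_system_suspend() and k_cpu_idle() are real APIs; the surrounding function is illustrative):

```
#include <zephyr.h>
#include <pm/pm.h>

void idle_iteration(int32_t ticks)
{
	if (pm_system_suspend(ticks)) {
		/* true: a low power state was entered; by the time we
		 * run again the PM subsystem has handled the exit path.
		 */
		return;
	}

	/* false: no low power state was entered; plain idle instead. */
	k_cpu_idle();
}
```
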
Andy Ross 7dee7a6139 kernel/sched: Fix race with thread return values
There was a brief (but seen in practice on real apps on real
hardware!) race with the switch-based z_swap() implementation.  The
thread return value was being initialized to -EAGAIN after the
enclosing lock had been released.  But that lock is supposed to be
atomic with the thread suspend.

This opened a window for another racing thread to come by and "wake
up" our pending thread (which is fine on its own), set its return
value (e.g. to 0 for success) and then have that value clobbered by
the thread continuing to suspend itself outside the lock.

Melodramatic aside: I continue to hate this
arch_thread_return_value_set() API; it needs to die.  At best it's a
mild optimization on a handful of architectures (e.g. x86 implements
it by writing to the EAX register save slot in the context block).
Asynchronous APIs are almost always worse than synchronous ones, and
in this case it's an async operation that races against literal
context switch code that can't use traditional locking strategies.

Fixes #39575

Signed-off-by: Andy Ross <andrew.j.ross@intel.com>
2021-10-25 12:31:06 +02:00
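
The shape of the fix, as a sketch (arch_thread_return_value_set() and _current are real kernel symbols; do_switch() is a hypothetical stand-in for the actual release-lock-and-switch step):

```
static void swap_sketch(struct k_spinlock *lock, k_spinlock_key_t key)
{
	/* Initialize the default return value while the scheduler lock
	 * is still held: a waker on another CPU may then overwrite it
	 * (e.g. with 0) via arch_thread_return_value_set() without the
	 * suspending side clobbering it afterwards.
	 */
	arch_thread_return_value_set(_current, -EAGAIN);

	/* Only now release the lock, atomically with the switch. */
	do_switch(lock, key);   /* hypothetical stand-in */
}
```
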
Chris Reed f8f3c00d07 kernel: thread.c: remove unused THREAD_COOKIE macro.
This macro was not referenced anywhere.

Signed-off-by: Chris Reed <chris.reed@arm.com>
2021-10-22 18:08:56 -04:00
Peter Mitsis ae394bff7c kernel: add support for event objects
Threads may wait on an event object such that any events posted to
that event object may wake a waiting thread if the posting satisfies
the waiting threads' event conditions.

The configuration option CONFIG_EVENTS is used to control the inclusion
of events in a system as their use increases the size of
'struct k_thread'.

Signed-off-by: Peter Mitsis <peter.mitsis@intel.com>
2021-10-16 06:27:10 -04:00
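
A usage sketch of the new API (k_event_init()/k_event_post()/k_event_wait() are the functions this commit adds; the event bits and functions here are made up for illustration):

```
#include <zephyr.h>

#define EVT_RX_DONE BIT(0)
#define EVT_TX_DONE BIT(1)

static struct k_event io_events;

void waiter(void)
{
	k_event_init(&io_events);

	/* Block until either bit is posted; 'false' leaves previously
	 * posted events in place rather than resetting them first.
	 */
	uint32_t evts = k_event_wait(&io_events, EVT_RX_DONE | EVT_TX_DONE,
				     false, K_FOREVER);

	if ((evts & EVT_RX_DONE) != 0U) {
		/* ... handle completed reception ... */
	}
}

void poster(void)
{
	k_event_post(&io_events, EVT_RX_DONE);
}
```
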
Chris Reed ab4d69baf3 kernel: work_q: use flags_get() in work_delayable_busy_get_locked().
The k_work::flags field is not an atomic_t and would cause a
-Wpointer-sign warning on some compilers. This function was the only
one in work.c to use atomic_get(), so there is no benefit to atomicity.

Signed-off-by: Chris Reed <chris.reed@arm.com>
2021-10-16 06:23:46 -04:00
Neil Armstrong 2f359aeacf mmu: fix virt_region_alloc() unused region free when aligned
In the case where the aligned memory range is on top of the allocated
memory range, freeing the zero-sized top unused memory will trigger
an assert in the virt_region_free() call, since vaddr could be equal
to Z_VIRT_REGION_END_ADDR.

Signed-off-by: Neil Armstrong <narmstrong@baylibre.com>
2021-10-13 06:24:56 -04:00
Neil Armstrong 7830f87ccd mmu: get virtual alignment from physical address
On ARM64 platforms, when mapping multiple memory zones with sizes
that are not multiples of the L2 block size (2MiB), all the following
mappings will probably use L3 tables.

And a huge mapping will consume all possible L3 tables.

In order to reduce usage of L3 tables, this introduces a new
arch_virt_region_align() optional architecture specific
call to eventually return a more optimal virtual address
alignment than the default MMU_PAGE_SIZE.

This alignment is used in virt_region_alloc() by:
- requesting more pages in virt_region_bitmap to make sure we request
  up to the possible aligned virtual address
- freeing the supplementary pages used for alignment

Suggested-by: Nicolas Pitre <npitre@baylibre.com>
Signed-off-by: Neil Armstrong <narmstrong@baylibre.com>
2021-10-11 21:00:28 -04:00
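
A sketch of the over-allocate-then-trim flow these two mmu commits touch (ROUND_UP and MMU_PAGE_SIZE are real symbols; bitmap_alloc()/bitmap_free() are hypothetical stand-ins for the virt_region bitmap helpers, and align is assumed to be a page multiple >= MMU_PAGE_SIZE):

```
#include <zephyr.h>

uintptr_t virt_region_alloc_sketch(size_t size, size_t align)
{
	/* Over-reserve so an 'align'-aligned start address is
	 * guaranteed to exist inside the reserved range.
	 */
	size_t alloc_size = size + (align - MMU_PAGE_SIZE);
	uintptr_t base = bitmap_alloc(alloc_size);      /* hypothetical */
	uintptr_t aligned = ROUND_UP(base, align);
	size_t head = aligned - base;
	size_t tail = alloc_size - size - head;

	/* Free the unused head/tail pages, guarding against the
	 * zero-sized frees that used to trip the assert above.
	 */
	if (head != 0) {
		bitmap_free(base, head);                /* hypothetical */
	}
	if (tail != 0) {
		bitmap_free(aligned + size, tail);
	}

	return aligned;
}
```
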
Stefan Eicher fbe7d7297e comments: minor typo fixes
Fixing minor typos in comments

Signed-off-by: Stefan Eicher <stefan.eicher@ypsomed.com>
Signed-off-by: Anas Nashif <anas.nashif@intel.com>
2021-10-05 07:18:13 -04:00
Martí Bolívar a627666e06 device: add fudge factor for handle padding
When CONFIG_USERSPACE is enabled, the ELF file from linker pass 1 is
used to create a hash table that identifies kernel objects by address.
We therefore can't allow the size of any object in the pass 2 ELF to
change in a way that would change those addresses, or we would create
a garbage hash table.

Simultaneously (and regardless of CONFIG_USERSPACE's value),
gen_handles.py must transform arrays of handles from their pass 1
values to their pass 2 values; see the file's docstring for more
details on that transformation.

The way this works is that gen_handles.py just pads out each pass 2
array so its length is the same as its pass 1 value. The padding value
is a repeated run of DEVICE_HANDLE_ENDS values. This value is the
terminator which we look for at runtime in places like
device_required_handles_get(), so there must be at least one, and we
error out in gen_handles.py if there's no room in the pass 2 array for
at least one such value. (If there is extra room, we just keep
inserting extra DEVICE_HANDLE_ENDS values to pad the array to its
original length.)

However, it is possible that a device has more direct dependencies in
the pass 2 handles array than its corresponding devicetree node had in
the pass 1 array. When this happens, users have no recourse, so that's
a potential showstopper.

To work around this possibility for now, add a new config option,
CONFIG_DEVICE_HANDLE_PADDING, whose value defaults to 0.

When nonzero, it is a count of padding handles that are inserted into
each device handles array. When gen_handles.py errors out due to lack
of room, its error message now tells the user how much to increase
CONFIG_DEVICE_HANDLE_PADDING by to work around the problem.

It looks like a real fix for this is to allocate kernel objects whose
addresses are required for hash tables in CONFIG_USERSPACE=y
configurations *before* the handle arrays. The handle arrays could
then be resized as needed in pass 2, which saves ROM by avoiding
unnecessary padding, and would avoid the need for
CONFIG_DEVICE_HANDLE_PADDING altogether.

However, this 'real fix' is not available and we are facing a deadline
to get a temporary solution in for Zephyr v2.7.0, so this is a good
enough workaround for now.

Signed-off-by: Martí Bolívar <marti.bolivar@nordicsemi.no>
2021-09-30 21:37:59 -04:00
Martí Bolívar b94a11f5d0 Revert "device: supported devices visitor API"
This reverts commit b01e41ccdd.

It's not clear that the supported devices are being properly computed,
so let's revert this for v2.7.0 until we've had more time to think
it through.

Signed-off-by: Martí Bolívar <marti.bolivar@nordicsemi.no>
2021-09-30 21:37:59 -04:00
Chen Peng1 dde3d6ca9b timer: mask interrupts in timer's timeout handler.
Before running a timer's timeout function, we need to make
sure that the threads waiting on this timer have been
added into the timer's wait queue, so take the timer lock to
mask interrupts in the z_timer_expiration_handler function
and synchronize access to the timer's wait queue.

Signed-off-by: Chen Peng1 <peng1.chen@intel.com>
2021-09-29 09:18:12 -04:00
Andy Ross b11e796c36 kernel/sched: Add CONFIG_CPU_MASK_PIN_ONLY
Some SMP applications have threading designs where every thread
created is always assigned to a specific CPU, and never want to
schedule them symmetrically across CPUs under any circumstance.

In this situation, it's possible to optimize the run queue design a
bit to put a separate queue in each CPU struct instead of having a
single global one.  This is probably good for a few cycles per
scheduling event (maybe a bit more on architectures where cache
locality can be exploited) in circumstances where there is more than
one runnable thread.  It's a mild optimization, but a basically simple
one.

Signed-off-by: Andy Ross <andrew.j.ross@intel.com>
2021-09-28 20:15:05 -04:00
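
Sketched, the run queue selection this option enables (field and helper names are simplified from kernel/sched.c):

```
static struct _ready_q *curr_ready_q(void)
{
#ifdef CONFIG_CPU_MASK_PIN_ONLY
	/* Threads never migrate, so each CPU owns a private run queue:
	 * no cross-CPU queue contention, and better cache locality.
	 */
	return &arch_curr_cpu()->ready_q;
#else
	/* Default SMP: one global run queue shared by all CPUs. */
	return &_kernel.ready_q;
#endif
}
```
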
Andy Ross b155d06712 kernel/sched: Factor out ready_q initialization
Split "init_ready_q()" into a separate function that operates on the
queue pointer and not the global kernel object.  Pure refactoring.

Signed-off-by: Andy Ross <andrew.j.ross@intel.com>
2021-09-28 20:15:05 -04:00
Andy Ross 387fdd2e53 kernel/sched: Refactor/simplify run queue accessors
Similar to the previous patch, the various _priq_run_*() functions are
always passed a first argument that is the singleton system run queue
(this is because the same backend functions are used by wait queues).

Refactor into a simpler API that places the access to the run queue in
just a single spot.

Signed-off-by: Andy Ross <andrew.j.ross@intel.com>
2021-09-28 20:15:05 -04:00
Andy Ross c230fb3580 kernel/sched: Simplify de/queue_thread()
Pure refactoring.  For historical reasons these two functions took a
first argument (a pointer to the run queue) that was always the same.
Eliminate it.

Signed-off-by: Andy Ross <andrew.j.ross@intel.com>
2021-09-28 20:15:05 -04:00
Krzysztof Chruscinski 76c309db7b kernel: thread: Fix thread runtime stats
Add missing parentheses. Without them, wrong results
appeared when k_cycle_get_32 wrapped.

Signed-off-by: Krzysztof Chruscinski <krzysztof.chruscinski@nordicsemi.no>
2021-09-23 11:31:02 -04:00
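
The class of bug being fixed, in a small illustration (not the literal patched expression; k_cycle_get_32() is the real API):

```
#include <zephyr.h>

void accumulate_usage(uint64_t *total, uint64_t last_switched_in)
{
	uint32_t now = k_cycle_get_32();

	/* Correct: subtract in 32 bits first, so a counter wrap still
	 * yields the true small delta, then widen and accumulate.
	 */
	*total += (uint32_t)(now - (uint32_t)last_switched_in);

	/* Broken variant: '*total += now - last_switched_in;' promotes
	 * both operands to 64 bits before subtracting, so once 'now'
	 * has wrapped below 'last_switched_in' the result underflows
	 * to an enormous value instead of the true delta.
	 */
}
```
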
Chen Peng1 0f63d1135c cmsis_rtos_v1: fix thread instances management.
Add a bitarray into struct osThreadDef_t to indicate whether the
thread is used or not. Then we can get the first available thread
by searching this array when creating a new thread, and update this
array to mark the thread free again when terminating it.

Signed-off-by: Chen Peng1 <peng1.chen@intel.com>
2021-09-09 12:01:06 -04:00
Andy Ross 0d763e0a10 cmake/compiler/xcc: sched: Support XCC inlining semantics
Cadence XCC is based on a very old gcc 4.2 compiler, which didn't
perfectly support C99 "inline" semantics with respect to
cross-translation-unit inline linkage (which Zephyr does not use, our
inlines are static only) and declaration order.

Fix the one spot where we were calling an inline before its
ALWAYS_INLINE definition, and add a flag to suppress the warning so
CI's trying to build with XCC and -Werror don't flip out.

Signed-off-by: Andy Ross <andrew.j.ross@intel.com>
2021-09-08 09:28:31 -04:00
Christopher Friedt 86f94ebaa6 kernel: init: remove empty lcov exclusion
Remove the empty lcov exclusion left over from the weak main
function.

Signed-off-by: Christopher Friedt <chrisfriedt@gmail.com>
2021-09-06 08:18:15 -04:00
Daniel Leung 049e3bac73 kernel: add -ENOTSUP doc to arch_float_en-/dis-able()
Some architectures already return -ENOTSUP when these functions
are called, so add this return value to the API doc.

Signed-off-by: Daniel Leung <daniel.leung@intel.com>
2021-09-03 10:00:02 -04:00
Andy Ross c6d077e1bc soc: intel_adsp/cavs_v25: Add CPU halt and relaunch APIs
Add a SOC API to allow for application control over deep idle power
states.  Note that the hardware idle entry happens out of the WAITI
instruction, so the application has to be responsible for ensuring
the CPU to be halted actually reaches idle deterministically.  Lots of
warnings in the docs to this effect.

Signed-off-by: Andy Ross <andrew.j.ross@intel.com>
2021-09-03 07:19:34 -04:00
Stephanos Ioannidis a77e7566a8 kernel: Remove unused timeout_q from z_kernel
This commit removes the `timeout_q` from the `struct z_kernel` since it
is no longer used.

Note that the new kernel timeout implementation introduced in the
commit 987c0e5fc1 uses `timeout_list`
global variable in place of it.

Signed-off-by: Stephanos Ioannidis <root@stephanos.io>
2021-09-01 19:51:08 -04:00
Torsten Rasmussen 25e1b12ec0 kernel: extract __weak main() into independent file
To support arm-ds / armlink it is required that the weak main is located
in an object external to the object using the weak symbol.

If the weak symbol is inside the object referring to it, then the weak
symbol will be used and this will result in
```
Error: L6200E: Symbol __ARM_use_no_argv multiply defined
    (by init.o and main.o).
```
as both the weak and strong symbols are used.

Signed-off-by: Torsten Rasmussen <Torsten.Rasmussen@nordicsemi.no>
2021-08-28 08:48:03 -04:00
Torsten Rasmussen a28830b811 linker: align __itcm_load_start / __dtcm_data_load_start linker symbols
Cleanup and preparation commit for linker script generator.

Zephyr linker scripts provides start and end symbols for each section,
and sometimes even size and LMA start symbols.

Generally, start and end symbols use the following pattern:
Section name:             foo
Section start symbol:     __foo_start
Section end symbol:       __foo_end

However, this pattern is not followed consistently.
To allow for linker script generation and to ensure consistent naming
of symbols, the following pattern is now applied consistently:
Section name:             foo
Section start symbol:     __foo_start
Section end symbol:       __foo_end
Section size symbol:      __foo_size
Section LMA start symbol: __foo_load_start

This commit aligns the symbols __itcm_load_start and
__dtcm_data_load_start with the other symbols so that they follow the
consistent pattern, which allows for linker script and scatter file
generation.

The symbols are named according to the section name they describe.
Section names are itcm and dtcm.

The following symbols are aligned in this commit:
-  __itcm_rom_start      -> __itcm_load_start
-  __dtcm_data_rom_start -> __dtcm_data_load_start

Signed-off-by: Torsten Rasmussen <Torsten.Rasmussen@nordicsemi.no>
2021-08-28 08:48:03 -04:00
Torsten Rasmussen 510d7dbfb6 linker: align _ramfunc_ram/rom_start/size linker symbol names
Cleanup and preparation commit for linker script generator.

Zephyr linker scripts provides start and end symbols for each section,
and sometimes even size and LMA start symbols.

Generally, start and end symbols use the following pattern:
Section name:             foo
Section start symbol:     __foo_start
Section end symbol:       __foo_end

However, this pattern is not followed consistently.
To allow for linker script generation and to ensure consistent naming
of symbols, the following pattern is now applied consistently:
Section name:             foo
Section start symbol:     __foo_start
Section end symbol:       __foo_end
Section size symbol:      __foo_size
Section LMA start symbol: __foo_load_start

This commit aligns the symbols for _ramfunc_ram/rom with the other
symbols so that they follow the consistent pattern, which allows for
linker script and scatter file generation.

The symbols are named according to the section name they describe.
Section name is `ramfunc`

The following symbols are aligned in this commit:
-  _ramfunc_ram_start  -> __ramfunc_start
-  _ramfunc_ram_end    -> __ramfunc_end
-  _ramfunc_ram_size   -> __ramfunc_size
-  _ramfunc_rom_start  -> __ramfunc_load_start

Signed-off-by: Torsten Rasmussen <Torsten.Rasmussen@nordicsemi.no>
2021-08-28 08:48:03 -04:00
Torsten Rasmussen 65a2de84a9 linker: align __data_ram/rom_start/end linker symbol names
Cleanup and preparation commit for linker script generator.

Zephyr linker scripts provides start and end symbols for each section,
and sometimes even size and LMA start symbols.

Generally, start and end symbols use the following pattern:
Section name:             foo
Section start symbol:     __foo_start
Section end symbol:       __foo_end

However, this pattern is not followed consistently.
To allow for linker script generation and to ensure consistent naming
of symbols, the following pattern is now applied consistently:
Section name:             foo
Section start symbol:     __foo_start
Section end symbol:       __foo_end
Section size symbol:      __foo_size
Section LMA start symbol: __foo_load_start

This commit aligns the symbols for _data_ram/rom with the other
symbols so that they follow the consistent pattern, which allows for
linker script and scatter file generation.

The symbols are named according to the section name they describe.
Section name is `data`

A new group named data_region is introduced which instead spans all the
input and output sections that were previously covered by
__data_ram_start, __data_ram_end, and __data_rom_start.

The following symbols are aligned in this commit:
-  __data_ram_start  -> __data_region_start
-  __data_ram_end    -> __data_region_end
-  __data_rom_start  -> __data_region_load_start

The following new symbols are introduced so that the data section is
aligned with other sections:
-  __data_start
       value identical to __data_region_start, but describes the start
       of the data section itself
-  __data_end

Signed-off-by: Torsten Rasmussen <Torsten.Rasmussen@nordicsemi.no>
2021-08-28 08:48:03 -04:00
Gerard Marull-Paretas 9917de4a1b pm: fully initialize pm_device on Z_DEVICE_STATE_DEFINE
The pm field of struct pm_device found in the device state can be
statically initialized, without the need of doing it at runtime.

Signed-off-by: Gerard Marull-Paretas <gerard.marull@nordicsemi.no>
2021-08-27 09:06:51 -04:00
Daniel Leung 5c4fff3998 kernel: kheap: make init work with demand paging
With demand paging, the heap object and its backing memory
may not be in physical memory. So initialize the heaps in the
pinned region at PRE_KERNEL_1, and the remaining heaps once the
paging mechanism has been initialized.

Signed-off-by: Daniel Leung <daniel.leung@intel.com>
2021-08-26 21:16:22 -04:00
Daniel Leung 2dfae4a0f7 kernel: demand_paging: allow reserving page frames
This adds the kconfig to allow reserving a number of page frames
which do not count towards free memory. This is to ensure that
there are enough page frames available for paging code and data.
Otherwise, it would be possible to exhaust all page frames via
anonymous memory mappings.

Signed-off-by: Daniel Leung <daniel.leung@intel.com>
2021-08-26 21:16:22 -04:00
Daniel Leung 2117a2a44b kernel: app_smem: allow pinning memory partitions
This allows memory partitions to be put into the pinned
section so they are available during boot. For example,
the stack guard (in the libc partition) is needed during boot
but before the paging mechanism is initialized. Without
pinning it in physical memory, it would fault early in the
boot process.

A new cmake property app_smem,pinned_partitions is
introduced so that additional partitions can be pinned
if needed.

Signed-off-by: Daniel Leung <daniel.leung@intel.com>
2021-08-26 21:16:22 -04:00
Daniel Leung e88afd2c37 kernel: mmu: pin/unpin boot sections during boot process
During boot process, the boot sections need to be pinned in
memory to prevent them from being paged out (to avoid
pages being paged out and immediately paged in again).
Once the boot process is completed (just before calling main()),
the boot sections can be unpinned so the memory can be
used for demand paging for paging in data pages.

Signed-off-by: Daniel Leung <daniel.leung@intel.com>
2021-08-26 21:16:22 -04:00
Daniel Leung f32ea4433c kernel: demand_paging: clear BSS after paging is initialized
If the BSS section is not present in memory at boot, it would not
have been cleared as the data pages are not in physical memory.
Manipulating those pages would result in page faults.
In this scenario, zeroing BSS can only be done once the paging
mechanism has been initialized. So do it there.

Signed-off-by: Daniel Leung <daniel.leung@intel.com>
2021-08-26 21:16:22 -04:00
Daniel Leung 7771d27525 kernel: mmu: move when page fault is counted
The code at the beginning of do_page_fault() pins the page
in memory if it is already present in physical memory.
It is there so that if a page is not present, the code can proceed
to perform page-in and then pin it. So the counting of
page faults needs to be moved after the pinning code so that
it actually counts page faults, and does not count pinning
operations when the page is already present.

Also clarify the comment on the goto statement as it is not
correct.

Signed-off-by: Daniel Leung <daniel.leung@intel.com>
2021-08-26 21:16:22 -04:00
Daniel Leung ebbfde9742 kernel: move z_main_stack into pinned section
The z_main_stack is needed before paging mechanism is initialized
so put the stack into the pinned section to avoid page faults.

Signed-off-by: Daniel Leung <daniel.leung@intel.com>
2021-08-26 21:16:22 -04:00
Daniel Leung c38634fa33 kernel: mmu: fix assigning unaligned addr to page frame
In do_page_fault(), the incoming page fault address is not
aligned, and it was unconditionally assigned to the page
frame virtual address field. If the backing store simply
returns the virtual address without processing in
k_mem_paging_backing_store_location_get(), this unaligned
address will be passed to arch_mem_page_out(). On x86,
it is further passed to range_map() which asserts if
the physical address is not page aligned. So align
the address to page size before assigning it to the page
frame virtual address field.

Signed-off-by: Daniel Leung <daniel.leung@intel.com>
2021-08-26 21:16:22 -04:00
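
The gist of the fix (ROUND_DOWN and CONFIG_MMU_PAGE_SIZE are real Zephyr symbols; the struct and function here are simplified stand-ins for the page frame code in kernel/mmu.c):

```
#include <stdint.h>
#include <sys/util.h>

struct page_frame_sketch {
	uintptr_t virt_addr;   /* must be page-aligned for arch code */
};

void record_fault_addr(struct page_frame_sketch *pf, uintptr_t fault_addr)
{
	/* Fault addresses arrive byte-granular; x86's range_map()
	 * asserts on unaligned addresses downstream, so align before
	 * storing in the page frame.
	 */
	pf->virt_addr = ROUND_DOWN(fault_addr, CONFIG_MMU_PAGE_SIZE);
}
```
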
Flavio Ceolin d9aa414831 kernel: work_q: Add an init function
k_work_queue_start receives a struct that is expected to be
uninitialized (zeroed). Otherwise the behavior is undefined.

Following the Zephyr semantics, this PR introduces a new init function
for this struct.

Fixes #36865

Signed-off-by: Flavio Ceolin <flavio.ceolin@intel.com>
2021-08-25 22:07:04 -04:00
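
Usage after this change (k_work_queue_init() is the new API; stack size and priority are arbitrary):

```
#include <zephyr.h>

K_THREAD_STACK_DEFINE(wq_stack, 1024);
static struct k_work_q wq;

void start_wq(void)
{
	/* New: put the struct into a known state first, instead of
	 * relying on it happening to be zeroed.
	 */
	k_work_queue_init(&wq);

	k_work_queue_start(&wq, wq_stack, K_THREAD_STACK_SIZEOF(wq_stack),
			   5 /* priority */, NULL);
}
```
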
Jordan Yates b01e41ccdd device: supported devices visitor API
Adds an API to query and visit supported devices. Follows the example
set by the required devices API.

Implements #37793.

Signed-off-by: Jordan Yates <jordan.yates@data61.csiro.au>
2021-08-25 19:43:54 -04:00
Daniel Leung 45f549dee9 kernel: allow linking a prebuilt library instead of compiling
This adds very primitive logic to allow linking a prebuilt
static library of kernel code instead of building the kernel
from source. Note that the library is built with a specific
set of kconfigs, and they must match when building applications,
or else there will be mysterious crashes.

Signed-off-by: Daniel Leung <daniel.leung@intel.com>
2021-08-25 18:01:14 -04:00
Naiyuan Tian bc3fda491f kernel: userspace: fix typo in the comments
While reading the code, I found some typos in the code comments at
lines 226 and 668.
Fix the comments to make them more solid.

Signed-off-by: Naiyuan Tian <naiyuan.tian@intel.com>
2021-08-24 07:31:49 -04:00
Maksim Masalski 8ed9cddbc5 lib: add explanation comment in "work->handler" case
The code scanning tool raises a violation for the dereferencing
of the "work" pointer in the expression "work->handler".

As was discussed before in PR #35664, this is not true.
Add an explanation comment, because the static code analysis tool
raised a false-positive violation there.

Signed-off-by: Maksim Masalski <maksim.masalski@intel.com>
2021-08-20 18:48:16 -04:00
Fabio Baltieri 7f36f90167 kernel: drop unused priority related definitions
There are no references for either K_NUM_PRIORITIES or
K_NUM_PRIO_BITMAPS, with the former being dropped in:

  1acd8c2996 kernel: Scheduler rewrite

Dropping both of these, and also the two comments about extra priorities
taking extra RAM space, as those do not seem to apply either.

Signed-off-by: Fabio Baltieri <fabiobaltieri@google.com>
2021-08-17 17:52:17 -04:00
Fabio Baltieri f88a420d69 toolchain: migrate iterable sections calls to the external API
This migrates all the current iterable section usages to the external
API, dropping the "Z_" prefix:

Z_ITERABLE_SECTION_ROM
Z_ITERABLE_SECTION_ROM_GC_ALLOWED
Z_ITERABLE_SECTION_RAM
Z_ITERABLE_SECTION_RAM_GC_ALLOWED
Z_STRUCT_SECTION_ITERABLE
Z_STRUCT_SECTION_ITERABLE_ALTERNATE
Z_STRUCT_SECTION_FOREACH

Signed-off-by: Fabio Baltieri <fabiobaltieri@google.com>
2021-08-12 17:47:04 -04:00
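
For reference, the external API in use (STRUCT_SECTION_ITERABLE and STRUCT_SECTION_FOREACH are the real un-prefixed macros; the struct, the linker-script registration, and the handlers are illustrative):

```
struct my_handler {
	const char *name;
	void (*fn)(void);
};

/* Places an instance in the 'my_handler' iterable section; the section
 * itself must be registered in the linker script, e.g. with
 * ITERABLE_SECTION_ROM(my_handler, 4).
 */
STRUCT_SECTION_ITERABLE(my_handler, handler_a) = {
	.name = "a",
	.fn = NULL,
};

void run_all_handlers(void)
{
	STRUCT_SECTION_FOREACH(my_handler, h) {
		if (h->fn != NULL) {
			h->fn();
		}
	}
}
```
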
Flavio Ceolin 8eceeee798 pm: device: Add wakeup source API
Introduce a new API to allow devices capable of waking up the system
to register themselves as wake-up sources. This permits applications to
select the most appropriate way to wake up the system when it is
suspended.

Signed-off-by: Flavio Ceolin <flavio.ceolin@intel.com>
2021-08-11 19:39:13 -04:00
Evgeniy Paltsev 497cb2e587 CPP: fix static objects init for MWDT toolchain
The constructors of static objects are stored in the ".ctors"
section. In the case of the MWDT toolchain, the ".ctors" section
format is incompatible with the GNU toolchain one. So let's use the
initialization code provided by MWDT instead of the Zephyr one
when the MWDT toolchain is used.

As is done for the GNU toolchain, we call constructors of
static objects but we don't call destructors for them.

Signed-off-by: Eugeniy Paltsev <Eugeniy.Paltsev@synopsys.com>
Signed-off-by: Evgeniy Paltsev <PaltsevEvgeniy@gmail.com>
2021-08-09 22:47:22 -04:00
Gerard Marull-Paretas c2cf1ad203 pm: device: remove usage of local states
The device PM subsystem already holds the device state, so there is no
need to keep duplicates inside the device. The pm_device_state_get has
been refactored to just return the device state. Note that this is still
not safe, but the same applied to the previous implementation. This
problem will be addressed later.

Signed-off-by: Gerard Marull-Paretas <gerard.marull@nordicsemi.no>
2021-08-04 08:23:01 -04:00
Flavio Ceolin f2e323f06c futex: Avoid unnecessary lock
A spin lock is not necessary to protect an atomic operation.

Signed-off-by: Flavio Ceolin <flavio.ceolin@intel.com>
2021-07-30 20:21:04 -04:00
Andrew Boie f07df42d49 kernel: make k_current_get() work without syscall
We cache the current thread ID in a thread-local variable
at thread entry, and have k_current_get() return that,
eliminating system call overhead for this API.

DL: changed _current to use z_current_get() as it is
    being used during boot where TLS is not available.

Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
Signed-off-by: Daniel Leung <daniel.leung@intel.com>
2021-07-30 20:16:47 -04:00
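
A sketch of the mechanism (z_tls_current is the real variable name; the getter is simplified, and as the note above says, kernel-internal code keeps using z_current_get() during early boot before TLS is set up):

```
#include <zephyr.h>

#ifdef CONFIG_THREAD_LOCAL_STORAGE
/* Written once at thread entry. */
__thread k_tid_t z_tls_current;

k_tid_t k_current_get_sketch(void)
{
	/* A plain thread-local load: no system call needed. */
	return z_tls_current;
}
#endif
```
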
Gerard Marull-Paretas 70322853a8 pm: device: move device busy APIs to pm subsystem
The following device busy APIs:

- device_busy_set()
- device_busy_clear()
- device_busy_check()
- device_any_busy_check()

were used for device PM, so they have been moved to the pm subsystem.
This means they are now prefixed with `pm_` and are defined in
`pm/device.h`.

If device PM is not enabled dummy functions are now provided that do
nothing or return `-ENOSYS`, meaning that the functionality is not
available.

Signed-off-by: Gerard Marull-Paretas <gerard.marull@nordicsemi.no>
2021-07-30 09:28:42 -04:00
Guennadi Liakhovetski 339a6bdafb kernel: fix several typos in a comment in timeout.c
Fix several simple typos.

Signed-off-by: Guennadi Liakhovetski <guennadi.liakhovetski@linux.intel.com>
2021-07-23 16:06:54 -04:00
Guennadi Liakhovetski 45b70e1500 smp: limit the scope of some SMP-only functions
z_smp_init() is only available if CONFIG_SMP is defined, and
smp_timer_init() also depends on two Kconfig parameters, so make it
conditional in cavs_timer.c. Also clarify some SMP-related comments
there.

Signed-off-by: Guennadi Liakhovetski <guennadi.liakhovetski@linux.intel.com>
2021-07-23 16:06:54 -04:00
Daniel Leung dbc0be487f kernel: use proper macro to declare extern interrupt stacks
The z_interrupt_stacks array was declared extern in the kernel internal
header file using the same macro which defines the stack
array, just with an added "extern" in front. This macro adds
alignment and section attributes which are actually not the same
as those of the actual stack array defined in kernel/init.c. The
section name used in the section attribute contains the file name
where the stack array is defined or extern declared, so the same
symbol, in this case z_interrupt_stacks, has different
attributes in two places, and GCC 11 starts to complain about
this. Use the newly introduced macro to extern declare
the stack array without adding/replacing any symbol attributes.

Signed-off-by: Daniel Leung <daniel.leung@intel.com>
2021-07-22 07:24:11 -05:00
David Palchak 043507dbd6 kernel: native_posix: Don't run global C++ constructors
On the native_posix board global object constructors
are run by the underlying OS runtime init prior to
Zephyr kernel init. Thus Zephyr should not run global
object constructors a second time. Doing so breaks
application behavior that relies on global
constructors doing work that must be done only once.
See bug #36858 for more information.

Signed-off-by: David Palchak <palchak@google.com>
2021-07-12 19:51:16 -04:00
Chih Hung Yu 0ef77d4ea4 kernel: Fix negative mutex lock_count value
If you try to unlock an unlocked mutex, it incorrectly
succeeds and decreases the lock count to -1.

Fixes #36572

Signed-off-by: Chih Hung Yu <chyu313@gmail.com>
2021-07-06 19:19:41 -04:00
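
The guard this fix implies, in outline (simplified from kernel/mutex.c; ownership checks and error codes elided):

```
#include <zephyr.h>

int k_mutex_unlock_sketch(struct k_mutex *mutex)
{
	/* An unlocked mutex has no owner; rejecting this case keeps
	 * lock_count from being decremented below zero.
	 */
	if (mutex->owner == NULL) {
		return -EINVAL;
	}

	/* ... verify the caller owns it, then decrement lock_count and
	 * hand the mutex to the next waiter when it reaches zero ...
	 */
	return 0;
}
```
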
Anas Nashif 8b3f36c656 kernel: move internal headers into include/kernel
Move 2 headers that are internal to the kernel into include/kernel.

Signed-off-by: Anas Nashif <anas.nashif@intel.com>
2021-06-16 20:38:55 -04:00
Jennifer Williams ae85da1c90 kernel: poll: fix coding guideline 15.7 missing comment
The final else {} in the if...else if construct is missing the
required comment (non-empty; ';' is not sufficient). This adds a
comment to comply with CG 15.7.

Signed-off-by: Jennifer Williams <jennifer.m.williams@intel.com>
2021-06-04 16:22:50 -05:00
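
The resulting shape, for illustration (function and conditions are made up):

```
#include <stdbool.h>

void poll_events_sketch(bool a, bool b)
{
	if (a) {
		/* ... handle condition a ... */
	} else if (b) {
		/* ... handle condition b ... */
	} else {
		/* No other condition requires action; a non-empty
		 * comment like this is what CG 15.7 demands (a bare
		 * ';' is not sufficient).
		 */
	}
}
```
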
Maksim Masalski e9600a75fc kernel: thread: add default case, remove unused break
According to the Zephyr Coding Guideline, all switch statements
shall be well-formed. Add a default case with a break and a comment
to keep the static analysis tool from raising a violation that there
is no default case.
Also, I think, in all the cases above there is no need to use "break",
because they already use "return".

Found as a coding guideline violation (MISRA R16.1) by static
coding scanning tool.

Signed-off-by: Maksim Masalski <maksim.masalski@intel.com>
2021-06-04 16:21:01 -05:00
Maksim Masalski 78ba2ec830 coding guidelines: put function types in prototype form with named parameters
Function types shall be in prototype form with named parameters

Found as a coding guideline violation (MISRA R8.2) by static
coding scanning tool.

Signed-off-by: Maksim Masalski <maksim.masalski@intel.com>
2021-06-04 16:20:06 -05:00
Torbjörn Leksell 70d721c1bb Tracing: Incorrect Unlock Mutex Trace Hook Fix
Changed location of the last k_mutex_unlock trace hook since it was
being called after k_sched_unlock, which could result in tracing
scenarios (other thread waiting for lock) where it appeared that a
mutex was being locked again before becoming unlocked.

Signed-off-by: Torbjörn Leksell <torbjorn.leksell@percepio.com>
2021-06-03 07:10:05 -05:00
Ioannis Glaropoulos dd574f6ec7 kernel: stack_sentinel: disable in single-threaded builds
Add a dependency on MULTITHREADING for the
STACK_SENTINEL feature, so it cannot get
enabled in single-thread Zephyr builds.

Signed-off-by: Ioannis Glaropoulos <Ioannis.Glaropoulos@nordicsemi.no>
2021-05-28 10:41:46 -05:00
Daniel Leung dfa4b7e375 kernel: mmu: z_backing_store* to k_mem_paging_backing_store*
These functions are those that need to be implemented by the backing
store outside the kernel. Promote them from z_* so they can be
included in documentation.

Signed-off-by: Daniel Leung <daniel.leung@intel.com>
2021-05-28 11:33:22 -04:00
Daniel Leung 31c362d966 kernel: mmu: rename z_eviction* to k_mem_paging_eviction*
These functions and data structures are those that need
to be implemented by the eviction algorithm and the application
outside the kernel. Promote them from z_* so they can be
included in documentation.

Signed-off-by: Daniel Leung <daniel.leung@intel.com>
2021-05-28 11:33:22 -04:00
Lauren Murphy 4c85b4606b kernel: k_sleep: fix return value for absolute timeout
Fixes calculation of remaining ticks returned from z_tick_sleep
so that it takes absolute timeouts into account.

Fixes #32506

Signed-off-by: Lauren Murphy <lauren.murphy@intel.com>
2021-05-26 18:11:52 -05:00
Ioannis Glaropoulos 4084242a71 kernel: make MULTITHREADING promptless if single-thread not supported
If single thread builds are not supported by the
architecture, the MULTITHREADING option should be
prompt-less to block any modifications to it. We
also introduce an explicit ARCH-level Kconfig that
reflects whether the ARCH is capable of single-thread
Zephyr builds.

Signed-off-by: Ioannis Glaropoulos <Ioannis.Glaropoulos@nordicsemi.no>
2021-05-26 11:03:22 -05:00
Flavio Ceolin 97281b3862 pm: device_runtime: get rid of the spinlock
Protect critical sections using the mutex.
The mutex is required to use the condition variable, and since we
need to atomically check the pm state and the workqueue before
waiting on the condition, it is necessary to protect them using the
same mutex.

Signed-off-by: Flavio Ceolin <flavio.ceolin@intel.com>
2021-05-26 10:56:55 -04:00
Flavio Ceolin 378a19d2a8 pm: device_runtime: Add helper to wait on async ops
Add a function that properly uses a mutex to check a condition before
waiting on the condition variable.

Signed-off-by: Flavio Ceolin <flavio.ceolin@intel.com>
2021-05-26 10:56:55 -04:00
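
The classic pattern such a helper wraps (k_condvar_wait(), K_MUTEX_DEFINE, and K_CONDVAR_DEFINE are real APIs; the flag and functions are illustrative):

```
#include <zephyr.h>

K_MUTEX_DEFINE(op_lock);
K_CONDVAR_DEFINE(op_cv);
static bool op_done;

void wait_for_op(void)
{
	k_mutex_lock(&op_lock, K_FOREVER);
	while (!op_done) {
		/* Atomically drops op_lock and sleeps; the mutex is
		 * re-acquired before this call returns.
		 */
		k_condvar_wait(&op_cv, &op_lock, K_FOREVER);
	}
	k_mutex_unlock(&op_lock);
}

void complete_op(void)
{
	k_mutex_lock(&op_lock, K_FOREVER);
	op_done = true;
	k_condvar_broadcast(&op_cv);
	k_mutex_unlock(&op_lock);
}
```
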
Maksim Masalski 970820e92d sched: create unique function name
In file include/kernel/thread.h, "struct _thread_base" has a member
called "_wait_q_t *pended_on".
At the same time, file kernel/sched.c has a function called
"static _wait_q_t *pended_on()".

The code scanning tool assigns a violation (MISRA R5.9) that a static
identifier is reused, because thread.h is included in the sched.c file.

I think we can rename the function to avoid misreading in the future.

Signed-off-by: Maksim Masalski <maksim.masalski@intel.com>
2021-05-25 19:06:21 -04:00
Andrzej Głąbek 59b21a29aa kernel: timeout: Fix adding of an absolute timeout
Correct the way the relative ticks value is calculated for an absolute
timeout. Previously, elapsed() was called twice and the returned value
was first subtracted from and then added to the ticks value. It could
happen that the HW counter value read by elapsed() changed between the
two calls to this function. This caused the test_timeout_abs test case
from the timer_api test suite to occasionally fail, e.g. on certain nRF
platforms.

Signed-off-by: Andrzej Głąbek <andrzej.glabek@nordicsemi.no>
2021-05-24 23:53:18 -04:00
Andy Ross 851d14afc8 kernel/sched: Remove "cooperative scheduling only" special cases
The scheduler has historically had an API where an application can
inform the kernel that it will never create a thread that can be
preempted, and the kernel and architecture layer would use that as an
optimization hint to eliminate some code paths.

Those optimizations have dwindled to almost nothing at this point, and
they're now objectively a smaller impact than the special casing that
was required to handle the idle thread (which, obviously, must always
be preemptible).

Fix this by eliminating the idea of "cooperative only" and ensuring
that there will always be at least one preemptible priority with value
>=0.  CONFIG_NUM_PREEMPT_PRIORITIES now specifies the number of
user-accessible priorities other than the idle thread.

The only remaining workaround is that some older architectures (and
also SPARC) use the CONFIG_PREEMPT_ENABLED=n state as a hint to skip
thread switching on interrupt exit.  So detect exactly those platforms
and implement a minimal workaround in the idle loop (basically "just
call swap()") instead, with a big explanation.

Note that this also fixes a bug in one of the philosophers samples,
where it would ask for 6 cooperative priorities but then use values -7
through -2.  It was assuming the kernel would magically create a
cooperative priority for its idle thread, which wasn't correct even
before.

Fixes #34584

Signed-off-by: Andy Ross <andrew.j.ross@intel.com>
2021-05-24 23:38:16 -04:00
Maksim Masalski d6c9d40ee0 userspace: remove dead code
File userspace.c contains dead code in the function char *otype_to_str().
Remove "return NULL" and replace it with "ret = NULL".

Found as a coding guideline violation (MISRA R2.1) by static
coding scanning tool.

Signed-off-by: Maksim Masalski <maksim.masalski@intel.com>
2021-05-24 22:35:03 -04:00
Armando Visconti 4b4068c948 kernel/device: add arg checking in z_device_ready()
If this call receives an invalid device pointer as an argument, it
assumes that the `device` is not ready for usage.

This routine is currently called by two device specific APIs:

    - device_usable_check(const struct device *dev)
    - device_is_ready(const struct device *dev)

The device-specific APIs documentation claims that these two
routines must be called with a device pointer captured from
DEVICE_DT_GET(). So passing NULL is a violation of the rule.

Nevertheless, it is quite common in drivers to assign NULL to
a device pointer if the corresponding DT property has not been
found (e.g. an unused gpio interrupt declaration for a given
device instance), and it seems legit to interpret this condition
the same as the device not being ready for usage.

Signed-off-by: Armando Visconti <armando.visconti@st.com>
2021-05-18 11:29:46 -05:00
Peter Bigot 656c09589a kernel: work: fix race condition with cancel before work runs
The original state management solution involved separate locks for a
work queue and each work item.  To avoid inter-lock dependencies a
window was left between the point where the work item was removed from
the queue (protected by queue lock) and the point where the work item
state was updated to mark the work item running.

This introduced a bug: If a cancellation was issued during this window
it would succeed, and the work item would appear to be idle even
though in fact the work queue thread was about to run it.

Since there is now only one lock, move the work item state updates
into the mutex regions associated with dequeuing the work item and
clearing the work queue busy flag.

Note that removing the window between queue and work mutex regions
eliminates the potential of having a dequeued work item be cancelled
before its QUEUED flag is cleared, simplifying the work item state
update.

Signed-off-by: Peter Bigot <peter.bigot@nordicsemi.no>
2021-05-18 15:02:08 +02:00
Maksim Masalski 929956df70 coding guidelines rule 14_3_j: add explicit case check
Violation of the [MISRAC2012-RULE_14_3-j]:
Boolean operations whose results are invariant
shall not be permitted

Probably that part of the code contains a misprint.
Add an explicit check for the _OBJ_INIT_FALSE case.

Signed-off-by: Maksim Masalski <maksim.masalski@intel.com>
2021-05-18 08:36:57 -04:00
Anas Nashif 7b9084cb4f ztest: set thread name to test name
Use the actual test name and not a hardcoded thread name. This is useful
for tracing test cases.

Signed-off-by: Anas Nashif <anas.nashif@intel.com>
2021-05-17 18:45:57 -04:00
Andy Ross bd077561d0 kernel/swap: Add assertion to catch lock-breaking context switches
Our z_swap() API takes a key returned from arch_irq_lock() and
releases it atomically with the context switch.  Make sure that the
action of the unlocking is to unmask interrupts globally.  If
interrupts would still be masked then that means there is an OUTER
interrupt lock still held, and the code that locked it surely doesn't
expect the thread to be suspended and interrupts unmasked while it's
held!

Unfortunately, this kind of mistake is very easy to make.  We should
catch that with a simple assertion.  This is essentially a crude
Zephyr equivalent of the extremely common "BUG: scheduling while
atomic" error in Linux drivers (just google it).

The one exception made is the circumstance where a thread has already
aborted itself.  At that stage, whatever upthread lock state might
have existed will have already been messed up, so there's no value in
our asserting here.  We can't catch all bugs, and this can actually
happen in error handling and/or test frameworks.

Fixes #33319

Signed-off-by: Andy Ross <andrew.j.ross@intel.com>
2021-05-17 15:27:37 -04:00
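
A sketch of what the assertion checks (arch_irq_unlocked() is the real arch API; the self-abort exemption described above is elided):

```
#include <zephyr.h>
#include <sys/__assert.h>

static inline void assert_swappable(unsigned int key)
{
	/* If restoring 'key' would leave interrupts masked, an OUTER
	 * irq_lock() is still held across the context switch: the
	 * Zephyr analogue of Linux's "scheduling while atomic".
	 */
	__ASSERT(arch_irq_unlocked(key),
		 "Context switching while holding lock!");
}
```
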
Daniel Leung 216dc5ddfe kernel: mmu: remove un-needed call to virt_to_bitmap_offset
When marking the reserved region at the end of the virtual address
space, calling virt_to_bitmap_offset() is not needed as we already
know the offset. So remove it.

Coverity-CID: 235930
Fixes #35160

Signed-off-by: Daniel Leung <daniel.leung@intel.com>
2021-05-13 09:00:54 -05:00
Daniel Leung 660d1478c6 kernel: init.c: tag source for boot/pinned sections
This adds the tags for functions and variables so they
can be put into boot/pinned sections.

Signed-off-by: Daniel Leung <daniel.leung@intel.com>
2021-05-10 16:00:43 -05:00
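
The kind of tagging this commit applies (the __boot_func/__pinned_bss tag macros come with the boot/pinned linker support in the commits below; the tagged symbols here are illustrative):

```
#include <stdint.h>
#include <linker/section_tags.h>

/* Code only needed during the boot process. */
__boot_func
static void early_setup(void)
{
	/* ... runs before the memory manager is up ... */
}

/* Data that must stay resident because the paging machinery
 * itself may touch it while servicing a fault.
 */
__pinned_bss
static uint8_t paging_scratch[64];
```
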
Daniel Leung 1310ad6b0e linker: add bits for pinned regions
This adds the necessary bits for linker scripts and source code
to specify which symbols need to be pinned in memory. This is
needed for demand paging as some functions and data must reside
in memory all the time and cannot be paged out (e.g. paging,
scheduler, and interrupt routines for functionality).

This is up to the arch/SoC/board to define the sections in
their linker scripts as the pinned section may need special
alignment which cannot be done in common script snippets.

Signed-off-by: Daniel Leung <daniel.leung@intel.com>
2021-05-10 16:00:43 -05:00
Daniel Leung d812728ec4 linker: add bits for boot regions
This adds the necessary bits for linker scripts and source code
to specify which symbols are needed for boot process so they
can be grouped together.

One use of this is to group boot-related code and data so these
won't intermingle with other kernel and application code and data,
for better caching.

This is a must for demand paging as some functions and data
must be available during the boot process and before the memory
manager is initialized. During this time, paging cannot be used
so symbols linked in virtual memory space are unavailable.

This is up to the arch/SoC/board to define the sections in
their linker scripts as section may need special alignment
which cannot be done in common script snippets.

Signed-off-by: Daniel Leung <daniel.leung@intel.com>
2021-05-10 16:00:43 -05:00
Carlo Caione f000695243 cache: Rename sys_{dcache,icache}_* to sys_{data,instr}_cache_*
To have a common prefix.

Signed-off-by: Carlo Caione <ccaione@baylibre.com>
2021-05-08 07:00:33 +02:00
Carlo Caione e2333269ae cache: Introduce external cache controller system support
The cache API currently shipped in Zephyr assumes that the cache
controller is always on-core and thus managed at the arch level. This is not
always the case because many SoCs rely on external cache controllers as
a peripheral external to the core (for example PL310 cache controller
and the L2Cxxx family). In some cases you also want a single driver to
control a whole set of cache controllers.

Rework the cache code introducing support for external cache
controllers.

Signed-off-by: Carlo Caione <ccaione@baylibre.com>
2021-05-08 07:00:33 +02:00
Anas Nashif 4d994af032 kernel: remove object tracing
Remove this intrusive tracing feature in favor of the new object tracing
using the main tracing feature in zephyr. See #33603 for the new tracing
coverage for all objects.

This will allow for support in more tools and less reliance on GDB for
tracing objects.

Signed-off-by: Anas Nashif <anas.nashif@intel.com>
2021-05-07 22:10:21 -04:00
Torbjörn Leksell 7a646b3f8e Tracing: Work Queue tracing
Add Work tracing, default tracing hooks, and documentation.

Signed-off-by: Torbjörn Leksell <torbjorn.leksell@percepio.com>
Signed-off-by: Anas Nashif <anas.nashif@intel.com>
2021-05-07 22:10:21 -04:00
Torbjörn Leksell cae9a905d4 Tracing: Poll API and Work Poll tracing
Add Poll API and Work Poll tracing, default hooks, and documentation.

Signed-off-by: Torbjörn Leksell <torbjorn.leksell@percepio.com>
Signed-off-by: Anas Nashif <anas.nashif@intel.com>
2021-05-07 22:10:21 -04:00
Torbjörn Leksell 3a66d6c695 Tracing: Timer tracing
Add Timer tracing, default tracing hooks, and documentation.

Signed-off-by: Torbjörn Leksell <torbjorn.leksell@percepio.com>
2021-05-07 22:10:21 -04:00
Torbjörn Leksell 65b376eb87 Tracing: Memory Slab tracing
Add memory slab tracing, default trace hooks, and documentation.

Signed-off-by: Torbjörn Leksell <torbjorn.leksell@percepio.com>
2021-05-07 22:10:21 -04:00
Torbjörn Leksell 80cd9dac22 Tracing: Memory Heap tracing
Add Memory heap tracing, default trace hooks, and documentation.

Signed-off-by: Torbjörn Leksell <torbjorn.leksell@percepio.com>
2021-05-07 22:10:21 -04:00
Torbjörn Leksell fa9e64b304 Tracing: Pipe tracing
Add Pipe tracing, default trace hooks, and documentation.

Signed-off-by: Torbjörn Leksell <torbjorn.leksell@percepio.com>
2021-05-07 22:10:21 -04:00
Torbjörn Leksell d2e7de522d Tracing: Mailbox tracing
Add Mailbox tracing, default tracing hooks, and documentation.

Signed-off-by: Torbjörn Leksell <torbjorn.leksell@percepio.com>
2021-05-07 22:10:21 -04:00
Torbjörn Leksell 9ab447b3de Tracing: Message Queue tracing
Add Message Queue tracing, default hooks, and documentation.

Signed-off-by: Torbjörn Leksell <torbjorn.leksell@percepio.com>
2021-05-07 22:10:21 -04:00
Torbjörn Leksell 69e8869127 Tracing: Memory Stack tracing
Add memory stack tracing, default trace hooks, and documentation.

Signed-off-by: Torbjörn Leksell <torbjorn.leksell@percepio.com>
2021-05-07 22:10:21 -04:00
Torbjörn Leksell f984823e0d Tracing: Queue tracing
Add Queue tracing hooks, default hooks, and documentation.

Signed-off-by: Torbjörn Leksell <torbjorn.leksell@percepio.com>
Signed-off-by: Anas Nashif <anas.nashif@intel.com>
2021-05-07 22:10:21 -04:00
Torbjörn Leksell f17144349b Tracing: Thread tracing
Add thread tracing hooks, default hooks, and documentation.

Signed-off-by: Torbjörn Leksell <torbjorn.leksell@percepio.com>
2021-05-07 22:10:21 -04:00
Torbjörn Leksell b93ff29e4b Tracing: Conditional variable tracing
Add conditional variable tracing hooks, default tracing hooks,
and documentation.

Signed-off-by: Torbjörn Leksell <torbjorn.leksell@percepio.com>
2021-05-07 22:10:21 -04:00
Torbjörn Leksell ed6148a841 Tracing: Mutex tracing hooks
Add mutex trace hooks, default mutex trace hooks, and trace hook
documentation.

Signed-off-by: Torbjörn Leksell <torbjorn.leksell@percepio.com>
2021-05-07 22:10:21 -04:00
Torbjörn Leksell 82addd6a64 Tracing: Semaphore tracing documentation
Add default semaphore trace hooks and documentation into
tracing.h.

Signed-off-by: Torbjörn Leksell <torbjorn.leksell@percepio.com>
Signed-off-by: Anas Nashif <anas.nashif@intel.com>
2021-05-07 22:10:21 -04:00
Torbjörn Leksell fcf2fb6320 Tracing: Semaphore tracing
Add semaphore tracing using the new trace macros.

Signed-off-by: Torbjörn Leksell <torbjorn.leksell@percepio.com>
2021-05-07 22:10:21 -04:00
Flavio Ceolin 54324fd08e power: device_pm: Use spin lock instead of semaphore
Device PM runtime was using a semaphore to protect critical sections,
but the enable/disable functions were waiting on the semaphore. So,
just replace it with a spin lock.

Signed-off-by: Flavio Ceolin <flavio.ceolin@intel.com>
2021-05-07 16:55:31 -04:00
Flavio Ceolin 8705c688e2 power: device_pm: Fix concurrence issues
The sync API was using k_poll_signal, and in certain conditions it is
possible to have multiple threads waiting on a signal, leading to
undefined behavior.

Signed-off-by: Flavio Ceolin <flavio.ceolin@intel.com>
2021-05-07 16:55:31 -04:00
Daniel Leung c31829074f kernel: mmu: use bitarrays for k_mem_map/k_mem_unmap
This uses bitarrays for allocating and deallocating virtual
addresses with k_mem_map() and k_mem_unmap(). This will
allow us to reuse virtual addresses.

Fixes #28900

Signed-off-by: Daniel Leung <daniel.leung@intel.com>
2021-05-07 13:36:22 -04:00
Daniel Leung c254c58184 kernel: mmu: add k_mem_unmap
This adds k_mem_unmap() so the memory mapped by k_mem_map()
can be freed.

Signed-off-by: Daniel Leung <daniel.leung@intel.com>
2021-05-07 13:36:22 -04:00
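
Usage of the resulting pair (k_mem_map()/k_mem_unmap() and K_MEM_PERM_RW are real symbols):

```
#include <zephyr.h>
#include <sys/mem_manage.h>

void scratch_demo(void)
{
	/* Map a zeroed, page-aligned anonymous region... */
	uint8_t *buf = k_mem_map(CONFIG_MMU_PAGE_SIZE, K_MEM_PERM_RW);

	if (buf == NULL) {
		return;
	}

	buf[0] = 0xaa;
	/* ... use the buffer ... */

	/* ...then release it so the virtual range can be reused. */
	k_mem_unmap(buf, CONFIG_MMU_PAGE_SIZE);
}
```
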
Daniel Leung 085d3768e1 kernel: mmu: introduce arch_page_phys_get()
This adds a new function prototype for arch_page_phys_get()
which will be used to translate mapped virtual addresses back
to physical memory addresses. This is needed for the future
k_mem_unmap() function which requires this to find
the corresponding page frame. It is faster to look through
the page tables instead of doing linear search of the page
frame array.

A weak function is provided in case arch_page_phys_get()
is not implemented at the arch level. This simply goes
through all the page frames and finds the one which is
mapped to the virtual address.

Signed-off-by: Daniel Leung <daniel.leung@intel.com>
2021-05-07 13:36:22 -04:00
Daniel Leung fe48f5a920 kernel: mmu: always use before/after guard pages for k_mem_map()
When we start allowing unmapping of memory regions, there is no
exact way to know if k_mem_map() was called with the guard page option
specified or not. So just unconditionally enable guard pages on
both sides of the memory region to hopefully catch access
violations.

Signed-off-by: Daniel Leung <daniel.leung@intel.com>
2021-05-07 13:36:22 -04:00
Daniel Leung e6df25f68c kernel: mmu: implement z_phys_unmap()
This provides a counterpart to z_phys_map() which can be used
to temporarily map memory regions during the boot process, and
subsequently discard the mapping.

Signed-off-by: Daniel Leung <daniel.leung@intel.com>
2021-05-07 13:36:22 -04:00
Guennadi Liakhovetski d7a3752915 work: remove a statement with no effect
work_timeout() is a function, a statement like "(void)work_timeout;"
has no effect.

Signed-off-by: Guennadi Liakhovetski <guennadi.liakhovetski@linux.intel.com>
2021-05-07 12:44:34 -04:00
Gerard Marull-Paretas d31a9be27c pm: device: rename device_pm struct to pm_device
Prefix all PM related functions/structures with pm.

Signed-off-by: Gerard Marull-Paretas <gerard.marull@nordicsemi.no>
2021-05-05 18:35:49 -04:00
Gerard Marull-Paretas 2c7b763e47 pm: replace DEVICE_PM_* states with PM_DEVICE_*
Prefix device PM states with PM.

Signed-off-by: Gerard Marull-Paretas <gerard.marull@nordicsemi.no>
2021-05-05 18:35:49 -04:00
Gerard Marull-Paretas 13f528bc59 kernel: replace power/power.h with pm/pm.h
Replace old header with the new one.

Signed-off-by: Gerard Marull-Paretas <gerard.marull@nordicsemi.no>
2021-05-05 18:35:49 -04:00
Jennifer Williams ca75bbef3c tests: boot_time: remove all the code and instrumentation feeding into test
Remove the config BOOT_TIME_MEASUREMENT and the corresponding #ifdef'd
code throughout (kernel/init.c, idle.c, core/common.S, reset.S, ...)
which held the extern hooks for z_timestamp_main and z_timestamp_idle
in the removed boot_time test suite.

Signed-off-by: Jennifer Williams <jennifer.m.williams@intel.com>
2021-05-05 10:41:15 -04:00
Guennadi Liakhovetski ced7866901 smp: move a preprocessor conditional from .c to cmake
smp.c only has to be built if CONFIG_SMP is enabled. Remove
preprocessor checks from the file itself and update cmake rules
instead.

Signed-off-by: Guennadi Liakhovetski <guennadi.liakhovetski@linux.intel.com>
2021-05-03 17:13:01 -04:00
Guennadi Liakhovetski 8d07b7751a smp: add a Kconfig option to delay booting secondary CPUs
Usually Zephyr boots all secondary CPUs as a part of system
boot. Some applications however need the ability to boot on
the main CPU only and enable secondary CPUs selectively at
run-time. Add a Kconfig option to support this behaviour.
When booting CPUs on demand applications also need helpers
to initialise a dummy thread and begin threaded execution
on those CPUs, add two such helpers.

Signed-off-by: Guennadi Liakhovetski <guennadi.liakhovetski@linux.intel.com>
2021-05-03 17:13:01 -04:00
Anas Nashif 6df4405cca doc: fix typos
Fix various typos in the docs.

Signed-off-by: Anas Nashif <anas.nashif@intel.com>
2021-04-30 16:03:08 -04:00
Krzysztof Chruscinski 2165e8c585 Revert "kernel: Deprecate CONFIG_MULTITHREADING"
This reverts commit 28cb9dab64.
2021-04-29 14:50:35 +02:00
Krzysztof Chruscinski b85250108c kernel: Limit kernel files when CONFIG_MULTITHREADING=n
Avoid fetching files which use the scheduler. By explicitly avoiding
the inclusion of RTOS-specific files, we ensure that they are not
fetched accidentally.

Signed-off-by: Krzysztof Chruscinski <krzysztof.chruscinski@nordicsemi.no>
2021-04-29 14:50:35 +02:00
Krzysztof Chruscinski 1ba23ca92b kernel: fatal: Avoid thread api access when no multithreading
Remove access to thread API when multithreading is off.

Signed-off-by: Krzysztof Chruscinski <krzysztof.chruscinski@nordicsemi.no>
2021-04-29 14:50:35 +02:00
Krzysztof Chruscinski c482a572d4 kernel: heap: Add support for CONFIG_MULTITHREADING=n
Ensure that k_heap does not attempt to block the thread when a
timeout is set and space cannot be allocated.

Signed-off-by: Krzysztof Chruscinski <krzysztof.chruscinski@nordicsemi.no>
2021-04-29 14:50:35 +02:00
Krzysztof Chruscinski 3b4b7c3a37 kernel: mem_slab: Add support to no multithreading
Mem_slab supports allocation with a timeout which blocks the context
if no slab is available. Updated to treat every timeout as K_NO_WAIT
when multithreading is disabled.

Signed-off-by: Krzysztof Chruscinski <krzysztof.chruscinski@nordicsemi.no>
2021-04-29 14:50:35 +02:00
Krzysztof Chruscinski dd0715c770 kernel: timer: Adding support to CONFIG_MULTITHREADING=n
Updated timer to not touch thread/scheduler code when multithreading
is off.

Signed-off-by: Krzysztof Chruscinski <krzysztof.chruscinski@nordicsemi.no>
2021-04-29 14:50:35 +02:00
Krzysztof Chruscinski 7dcff6ecfe kernel: Move _kernel from sched to init
_kernel struct can be used when multithreading is disabled.
In that case sched.c may not be compiled.

Signed-off-by: Krzysztof Chruscinski <krzysztof.chruscinski@nordicsemi.no>
2021-04-29 14:50:35 +02:00
Krzysztof Chruscinski b8fb353cd4 kernel: Move k_busy_wait from thread to timeout
k_busy_wait is the only function from thread.c that is used when
CONFIG_MULTITHREADING=n. Move it to timeout.c since it fits better
there, as it requires the sys clock to be present.

Signed-off-by: Krzysztof Chruscinski <krzysztof.chruscinski@nordicsemi.no>
2021-04-29 14:50:35 +02:00
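Since k_busy_wait() only spins on the system clock, it remains
available with CONFIG_MULTITHREADING=n; for example:

    #include <kernel.h>

    void settle_hardware(void)
    {
        /* Spin for 1000 microseconds: no sleeping, no scheduler,
         * just the system clock.
         */
        k_busy_wait(1000);
    }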
Daniel Leung c8177ace3a kernel: work: handler null check is to NULL...
...instead of numeric zero.

Current usage is a violation of MISRA rule 11.9.

Signed-off-by: Daniel Leung <daniel.leung@intel.com>
2021-04-29 07:16:37 -04:00
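The shape of the fix here and in the device change below, sketched
with an invented helper:

    #include <kernel.h>
    #include <stdbool.h>

    /* MISRA rule 11.9: compare pointers against NULL, not against
     * the numeric literal 0.
     */
    static bool handler_is_set(k_work_handler_t handler)
    {
        return handler != NULL;   /* was: handler != 0 */
    }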
Daniel Leung 0773441422 kernel: device: return NULL for pointer type
Return NULL instead of numeric zero for pointer types.

Current usage violates MISRA rule 11.9.

Signed-off-by: Daniel Leung <daniel.leung@intel.com>
2021-04-29 07:16:37 -04:00
Daniel Leung abfe045fd3 kernel: userspace: rename obj_list in struct dyn_obj
This renames the obj_list element in struct dyn_obj to
dobj_list, to avoid identifier collision with the static
obj_list defined in userspace.c.

The collision is a violation of MISRA rule 5.9.

Signed-off-by: Daniel Leung <daniel.leung@intel.com>
2021-04-29 07:16:11 -04:00
Jennifer Williams 9aa0f212ae kernel: work: fix missing final else
work_queue_main() was missing the final else statement in its
if ... else if construct. This commit adds else {} to comply with
coding guideline 15.7, including a context-specific comment
describing why the branch is empty.

Signed-off-by: Jennifer Williams <jennifer.m.williams@intel.com>
2021-04-27 17:31:59 -04:00
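The pattern applied by this and the two following guideline 15.7
fixes, as a self-contained sketch with invented state flags:

    #include <stdbool.h>

    static int submitted;
    static int cancelled;

    static void handle(bool pending, bool cancelling)
    {
        if (pending) {
            submitted++;
        } else if (cancelling) {
            cancelled++;
        } else {
            /* Deliberately empty: nothing to do in this state.
             * The explicit branch (with a comment) satisfies
             * coding guideline 15.7.
             */
        }
    }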
Jennifer Williams dc11ffb562 kernel: timeout: fix missing final else
z_timeout_end_calc() was missing the final else statement in its
if ... else if construct. This commit pulls the last condition into
a final else {} to comply with guideline 15.7.

Signed-off-by: Jennifer Williams <jennifer.m.williams@intel.com>
2021-04-27 17:31:59 -04:00
Jennifer Williams c00bdcf1a8 kernel: poll: fix missing final else
register_events() and signal_poll_event() were missing the final
else statement in their if ... else if constructs. This commit adds
else {} to comply with coding guideline 15.7.

Signed-off-by: Jennifer Williams <jennifer.m.williams@intel.com>
2021-04-27 17:31:59 -04:00
Gerard Marull-Paretas bfce935caf power: remove device_pm_control_nop function
Devices that do not require PM should just use NULL.
`device_pm_control_nop` is still kept as an alias for NULL until all
in-tree usage is replaced with NULL.

Code relying on the device_pm_control function now returns -ENOTSUP
(equivalent to calling device_pm_control_nop).

Signed-off-by: Gerard Marull-Paretas <gerard.marull@nordicsemi.no>
2021-04-27 16:28:49 -04:00
Daniel Leung 1117169980 kernel: generate placeholders for kobj tables before final build
Due to the use of gperf to generate hash table for kobjects,
the addresses of these kobjects cannot change during the last
few phases of linking (especially between zephyr_prebuilt.elf
and zephyr.elf). Because of this, the gperf generated data
needs to be placed at the end of memory to avoid pushing symbols
around in memory. This prevents moving these generated blocks
to earlier sections, for example, pinned data section needed
for demand paging. So create placeholders for use in
intermediate linking to reserve space for these generated blocks.
Due to uncertainty about the size of these blocks, extra space is
reserved, which could result in some waste. This does, however,
retain the hash table for faster lookup.

Signed-off-by: Daniel Leung <daniel.leung@intel.com>
2021-04-27 13:32:00 -04:00
Peter Bigot 707dc22fb0 kernel: fix error in synchronous work cancellation return value
The return value is documented to be true if the work was pending, but
the implementation returned true only if the work was actually running
(i.e. the caller had to wait).  It should also return true if
scheduled or submitted work was cancelled.

Note that this means the return value cannot be used to determine
whether the call slept.

Signed-off-by: Peter Bigot <peter.bigot@nordicsemi.no>
2021-04-27 13:28:45 -04:00
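With the fix, the return value reports whether anything was
cancelled, not whether the caller had to wait; a usage sketch with
invented names:

    #include <kernel.h>
    #include <stdbool.h>

    static struct k_work_delayable dwork;

    void stop_work(void)
    {
        struct k_work_sync sync;

        /* true if the work was scheduled, submitted, or running
         * when cancelled; false if there was nothing to cancel.
         * It says nothing about whether this call slept.
         */
        bool was_pending = k_work_cancel_delayable_sync(&dwork, &sync);

        ARG_UNUSED(was_pending);
    }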
Nicolas Pitre f97d12936e kernel: add an architecture specific structs header
Add the ability to define architecture specific structures, notably
the ability to extend struct _cpu with per-CPU arch-specific stuff that
can be accessed with _current_cpu->arch.* similarly to _current->arch.*
for per-thread architecture data.

This is opt-in for architectures that want to benefit from this,
otherwise empty defaults are provided. A placeholder for ARM64 is
included to show the pattern.

Signed-off-by: Nicolas Pitre <npitre@baylibre.com>
2021-04-21 09:03:47 -04:00
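A hedged sketch of the opt-in pattern, with an invented field; the
exact header and struct names may differ:

    #include <stdint.h>

    /* Provided by an opting-in architecture (sketch): */
    struct _cpu_arch {
        uint32_t irq_depth;   /* invented per-CPU arch field */
    };

    /* Kernel or arch code can then reach it through the per-CPU
     * struct:
     *
     *     _current_cpu->arch.irq_depth++;
     */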
Peter Bigot f69a38162a kernel: atomic: consistently use named type for atomic pointer values
There's a typedef for non-pointer values compatible with atomic
non-pointer objects.  Add a similar typedef for pointer values, and
the corresponding macro for initializing atomic pointer types.

This also will simplify replacing the Zephyr atomic API with one
based on C11 atomics, should that be desirable.  C11 atomic pointer
values are not void*.

Signed-off-by: Peter Bigot <peter.bigot@nordicsemi.no>
2021-04-19 15:22:13 +02:00
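Usage of the new pointer typedef and its initializer; the variable
and function names are invented:

    #include <kernel.h>

    static atomic_ptr_t current_buf = ATOMIC_PTR_INIT(NULL);

    void publish(void *buf)
    {
        atomic_ptr_set(&current_buf, buf);
    }

    void *snapshot(void)
    {
        return atomic_ptr_get(&current_buf);
    }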
Nick Graves b445f13462 kernel: Allow k_poll on message queues
This commit adds the ability to use a message queue as a
k_poll object. It follows the same pattern as polling on
FIFOs.

This change has been proven in practice at Samsara.

Fixes: #26728

Signed-off-by: Nick Graves <nicholas.graves@samsara.com>
2021-04-17 07:47:26 -04:00
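A sketch of the resulting usage, mirroring the existing FIFO pattern;
the queue name and dimensions are invented:

    #include <kernel.h>

    K_MSGQ_DEFINE(my_msgq, sizeof(uint32_t), 8, 4);

    void wait_for_message(void)
    {
        uint32_t msg;
        struct k_poll_event ev = K_POLL_EVENT_INITIALIZER(
            K_POLL_TYPE_MSGQ_DATA_AVAILABLE,
            K_POLL_MODE_NOTIFY_ONLY,
            &my_msgq);

        /* Wake when the queue has data, then drain one message. */
        k_poll(&ev, 1, K_FOREVER);
        k_msgq_get(&my_msgq, &msg, K_NO_WAIT);
    }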
Nicolas Pitre 2bed37e534 mem_slab: move global lock to per slab lock
This avoids contention between unrelated slabs and allows for
userspace-accessible slabs when located in memory partitions.

Signed-off-by: Nicolas Pitre <npitre@baylibre.com>
2021-04-14 14:20:19 -04:00
Flavio Ceolin f6f951cc17 kernel: Fix 10.4 violations
Both operands of an operator in which the usual arithmetic
conversions are performed shall have the same essential
type category.

Signed-off-by: Flavio Ceolin <flavio.ceolin@intel.com>
2021-04-10 09:59:37 -04:00
Carlo Caione 64dfa69681 aarch64: Remove useless _curr_cpu struct
Currently _curr_cpu is only used by the get_cpu macro to quickly
access the cpu struct. This is not really necessary because we can
access the struct by directly referencing &(_kernel.cpus[cpu_num])
in assembly.
Signed-off-by: Carlo Caione <ccaione@baylibre.com>
2021-04-09 08:10:10 -04:00
Daniel Leung 09e8db3d68 kernel: enable using timing subsys to collect paging histograms
This adds bits to the paging timing histogram collection routines
so they can use timing functions to collect execution time data.

Signed-off-by: Daniel Leung <daniel.leung@intel.com>
2021-04-06 16:43:55 -04:00
Daniel Leung 1559712b22 timing: add kconfig CONFIG_TIMING_FUNCTIONS_NEED_AT_BOOT
This adds a new kconfig CONFIG_TIMING_FUNCTIONS_NEED_AT_BOOT so
that the timing subsystem can be initialized at boot, instead of
being #ifdef under thread runtime statistics. This will allow
other parts of the kernel and other subsystems to utilize the timing
functions.

Signed-off-by: Daniel Leung <daniel.leung@intel.com>
2021-04-06 16:43:55 -04:00
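With the subsystem initialized at boot, any kernel code can take
measurements; a minimal sketch using the timing API:

    #include <kernel.h>
    #include <timing/timing.h>

    uint64_t measure_ns(void (*fn)(void))
    {
        timing_t start, end;

        timing_init();
        timing_start();

        start = timing_counter_get();
        fn();
        end = timing_counter_get();

        timing_stop();

        return timing_cycles_to_ns(timing_cycles_get(&start, &end));
    }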
Daniel Leung 8eea5119d7 kernel: mmu: demand paging execution time histogram
This adds the bits to record execution time of eviction selection,
and backing store page-in/page-out in histograms.

Signed-off-by: Daniel Leung <daniel.leung@intel.com>
2021-04-06 16:43:55 -04:00
Daniel Leung ae86519819 kernel: mmu: collect more demand paging statistics
This adds more bits to gather statistics on demand paging,
e.g. clean vs dirty pages evicted, # page faults with
IRQ locked/unlocked, etc.

Also extends this to gather per-thread demand paging
statistics.

Signed-off-by: Daniel Leung <daniel.leung@intel.com>
2021-04-06 16:43:55 -04:00
Anas Nashif 3f4f3f6c43 kernel: make tests of a value against zero explicit
Tests of a value against zero should be made explicit, unless the
operand is effectively Boolean. This is based on MISRA rule 14.4.

Signed-off-by: Anas Nashif <anas.nashif@intel.com>
2021-04-01 05:34:17 -04:00
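The shape of these changes, on an invented counter:

    #include <stdint.h>

    void drain(uint32_t count)
    {
        /* MISRA rule 14.4: the controlling expression must be
         * essentially Boolean, so test against zero explicitly.
         */
        while (count != 0U) {   /* was: while (count) */
            count--;
        }
    }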
Anas Nashif 0630452890 x86: make tests of a value against zero explicit
Tests of a value against zero should be made explicit, unless the
operand is effectively Boolean. This is based on MISRA rule 14.4.

Signed-off-by: Anas Nashif <anas.nashif@intel.com>
2021-04-01 05:34:17 -04:00
Anas Nashif 25c87db860 kernel/arch: cleanup function definitions
Make identifiers used in the declaration and definition identical. This
is based on MISRA rule 8.3.

Signed-off-by: Anas Nashif <anas.nashif@intel.com>
2021-04-01 05:34:17 -04:00
Anas Nashif bbbc38ba8f kernel: Make both operands of operators of same essential type category
Add a 'U' suffix to values when computing and comparing against
unsigned variables, along with other related fixes for the same
MISRA rule (10.4).

Signed-off-by: Anas Nashif <anas.nashif@intel.com>
2021-04-01 05:34:17 -04:00
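The typical 10.4 fix, with invented values:

    #include <stdbool.h>
    #include <stdint.h>

    #define MAX_ENTRIES 16U   /* 'U' keeps the literal unsigned */

    bool table_full(uint32_t used)
    {
        /* Both operands are essentially unsigned, so the usual
         * arithmetic conversions stay within one type category
         * (MISRA rule 10.4).
         */
        return used >= MAX_ENTRIES;
    }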
Peter Bigot fed035231f kernel: work: fix schedule from running work
k_work_schedule() is supposed to be a no-op if the work item is
already scheduled or submitted: the previous schedule is left
unchanged.  The check incorrectly inhibited the schedule operation
when the work item was neither scheduled nor submitted, but was
running.

Signed-off-by: Peter Bigot <peter.bigot@nordicsemi.no>
2021-03-29 12:27:36 -04:00
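Behaviour after the fix, as a usage sketch with invented names:

    #include <kernel.h>

    static struct k_work_delayable retry_work;

    void request_retry(void)
    {
        /* Returns 0 and leaves the previous schedule unchanged if
         * the item is already scheduled or submitted; returns 1
         * when this call schedules it, including the previously
         * broken case where the handler is running right now.
         */
        int ret = k_work_schedule(&retry_work, K_MSEC(50));

        ARG_UNUSED(ret);
    }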
Anas Nashif d8f698703b kernel: idle/z_sched_prio_cmp: match implementation to prototype
The identifiers used in the declaration and definition of a function
shall be identical [MISRAC2012-RULE_8_3-b]

Signed-off-by: Anas Nashif <anas.nashif@intel.com>
2021-03-29 07:52:42 -04:00
Katsuhiro Suzuki 19db485737 kernel: arch: use ENOTSUP instead of ENOSYS in k_float_disable()
This patch replaces ENOSYS with ENOTSUP to keep consistency with
the return value specification of k_float_enable().

Signed-off-by: Katsuhiro Suzuki <katsuhiro@katsuster.net>
2021-03-25 14:13:23 +01:00
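Callers can now handle both functions uniformly; a short sketch:

    #include <kernel.h>
    #include <errno.h>

    void fpu_off(struct k_thread *thread)
    {
        int ret = k_float_disable(thread);

        if (ret == -ENOTSUP) {
            /* Disabling FP is not supported in this configuration,
             * mirroring k_float_enable()'s return value spec.
             */
        }
    }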