If a devicetree `chosen` entry exists for a memory area of type OCM, zero
the OCM memory's BSS section at boot time.
Signed-off-by: Immo Birnbaum <Immo.Birnbaum@weidmueller.com>
We can't simply use CLAMP to set the next timeout because
when CONFIG_SYSTEM_CLOCK_SLOPPY_IDLE is set, MAX_WAIT is
a negative number, so CLAMP would be called with the high
boundary lower than the low boundary.
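As a minimal sketch of the failure mode (CLAMP shown with its usual
min/max-style definition, self-contained here for illustration):
```
#define MIN(a, b)          (((a) < (b)) ? (a) : (b))
#define CLAMP(val, lo, hi) (((val) <= (lo)) ? (lo) : MIN(val, hi))

/* With CONFIG_SYSTEM_CLOCK_SLOPPY_IDLE, MAX_WAIT is negative, so the
 * high boundary ends up below the low one and the result is nonsense:
 */
int next = CLAMP(100, 1, -1);  /* yields -1, not a usable timeout */
```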
Fixes #41422
Signed-off-by: Flavio Ceolin <flavio.ceolin@intel.com>
Ensure that logging and "satellite" subsystems, such as tracing and
object tracking, can count on kernel structs such as `_current_cpu`
being properly initialised.
Fixes #42061.
Signed-off-by: Ederson de Souza <ederson.desouza@intel.com>
Updates sched_cpu_update_usage() such that the CPU runtime stats
only update the non-idle time when the current thread is not the
idle thread. This is necessary as otherwise the CPU's idle time will
be double counted in k_thread_runtime_stats.execution_cycles.
Signed-off-by: Peter Mitsis <peter.mitsis@intel.com>
Introduce a hidden kconfig CONFIG_KERNEL_VM_SUPPORT which
enables some kconfigs that are required for virtual memory
support. CONFIG_KERNEL_VM_BASE, CONFIG_KERNEL_VM_OFFSET,
and CONFIG_KERNEL_VM_SIZE are moved under this new kconfig
so these can be enabled independent of CONFIG_MMU.
Signed-off-by: Daniel Leung <daniel.leung@intel.com>
This moves CONFIG_MMU and its children from arch/Kconfig into
kernel/Kconfig. These are used to enable kernel support of MMU
so they should be under kernel/.
Signed-off-by: Daniel Leung <daniel.leung@intel.com>
There is no need to use conditional compilation for the function
prototypes in the kernel architecture header file, so remove it.
An added bonus is that these functions can appear in the documentation
without being explicitly enabled in pre-defines during the doc build.
Signed-off-by: Daniel Leung <daniel.leung@intel.com>
Add the maximum timeout used for conversion to Kconfig. This option is
used to determine which conversion algorithm to use: faster but
overflowing earlier, or slower without early overflow.
Signed-off-by: Krzysztof Chruscinski <krzysztof.chruscinski@nordicsemi.no>
Commit 3457118 changed the sequence of z_smp_thread_init() and
smp_timer_init() in the smp_init_top() subroutine, which initializes
the other processors. On some boards (up_squared, acrn_ehl_crb), SMP
fails to work if the timer interrupt is enabled before the first
thread has been initialized. Change the order back to the original.
Fixes #41835
Signed-off-by: Enjia Mai <enjia.mai@intel.com>
For functions returning nothing, there is no need to document
them with @return, as Doxygen complains about "documented empty
return type of ...".
Signed-off-by: Daniel Leung <daniel.leung@intel.com>
This has bitrotten a bit. Early implementations had a synchronous
arch_start_cpu(), but then we started allowing that to be an async
operation. But that means that CPU start now becomes surprisingly
reentrant to the arch layer (cpu 0 can get a call to start cpu 2 while
cpu 1's initialization code is still running). That's just error
prone; we never documented the requirements cleanly (the window is
very small, but not so small on a slow simulator!).
Add an extra flag so we don't issue the next start until the last is
out of the arch layer and running in smp_init_top().
Signed-off-by: Andy Ross <andrew.j.ross@intel.com>
Adds two routines to flush pipe objects:
k_pipe_flush()
- This routine flushes the entire pipe. That includes both
the pipe's buffer and all pended writers. It is equivalent
to reading everything into a giant temporary buffer which
is then discarded.
k_pipe_buffer_flush()
- This routine flushes only the pipe's buffer (if it exists).
It is equivalent to reading a maximum of "buffer size" bytes
into a temporary buffer which is then discarded.
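A short usage sketch of the two routines (my_pipe is a hypothetical
pipe object):
```
#include <zephyr.h>

K_PIPE_DEFINE(my_pipe, 64, 4);  /* hypothetical 64-byte pipe */

void reset_pipe(void)
{
        /* Discard only the buffered bytes; pended writers are untouched. */
        k_pipe_buffer_flush(&my_pipe);

        /* Discard everything: buffered bytes and all pended writers. */
        k_pipe_flush(&my_pipe);
}
```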
Signed-off-by: Peter Mitsis <peter.mitsis@intel.com>
Fixes a race condition in the k_pipe_cleanup() routine by adding
a spinlock. Additionally, internal counters are now reset after
freeing the buffer as the pipe has now become a bufferless pipe.
Signed-off-by: Peter Mitsis <peter.mitsis@intel.com>
Resolves void pointer arithmetic build warnings in k_pipe_put() by
casting the pointer to a uint8_t pointer.
Signed-off-by: Peter Mitsis <peter.mitsis@intel.com>
Extends the CPU usage runtime stats to track current, total, peak
and average usage (as bounded by the scheduling of the idle thread).
This permits a developer to obtain more system information if desired
to tune the system.
Signed-off-by: Peter Mitsis <peter.mitsis@intel.com>
When the new Kconfig option CONFIG_SCHED_THREAD_USAGE_ANALYSIS
is enabled, additional timing stats are collected during context
switches. This extra information allows a developer to obtain the
current, longest, average and total lengths of the time that
a thread has been scheduled to execute.
A developer can in turn use this information to tune their app and/or
alter their scheduling policies.
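A hedged sketch of reading the extra stats; the stat field names used
below are assumptions for illustration:
```
#include <zephyr.h>

void show_usage(k_tid_t thread)
{
        k_thread_runtime_stats_t stats;

        k_thread_runtime_stats_get(thread, &stats);

        /* Field names are assumptions for illustration. */
        printk("current %llu, peak %llu, average %llu, total %llu\n",
               stats.current_cycles, stats.peak_cycles,
               stats.average_cycles, stats.total_cycles);
}
```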
Signed-off-by: Peter Mitsis <peter.mitsis@intel.com>
This commit does two things to the z_sched_thread_usage(). First,
it updates the API so that it accepts a pointer to the runtime
stats instead of simply returning the usage cycles. This gives it
the flexibility to retrieve additional statistics in the future.
Second, the runtime stats are only updated if the specified thread
is the current thread running on the current core.
Signed-off-by: Peter Mitsis <peter.mitsis@intel.com>
Moves the CONFIG_SCHED_THREAD_USAGE block of code out of sched.c
into its own file. Not only do these routines employ their own
private spinlock, but additional usage routines are expected to be
added in the future.
Signed-off-by: Peter Mitsis <peter.mitsis@intel.com>
The functionality provided by device_usable_check is already provided by
device_is_ready. The (z_)device_usable_check APIs have been
re-implemented using the (z_)device_is_ready APIs and have been marked
as deprecated.
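A call site would migrate roughly like this sketch:
```
/* Deprecated: */
if (device_usable_check(dev) != 0) {
        return -ENODEV;
}

/* Equivalent replacement: */
if (!device_is_ready(dev)) {
        return -ENODEV;
}
```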
Signed-off-by: Gerard Marull-Paretas <gerard.marull@nordicsemi.no>
Instead of using device_usable_check() syscall, implement a new syscall
for device_is_ready that uses z_device_is_ready underneath.
Signed-off-by: Gerard Marull-Paretas <gerard.marull@nordicsemi.no>
Rename z_device_ready to z_device_is_ready. Function name suggests a
boolean result this way, in line with other functions (e.g.
device_is_ready).
Signed-off-by: Gerard Marull-Paretas <gerard.marull@nordicsemi.no>
The resource pool of the short-lived dummy thread "stub" may be
inherited by other threads created during system initialization. This
commit initializes this resource pool to NULL or the system pool to
ensure that a well-defined resource pool propagates to other threads
that inherit it from the dummy thread.
Fixes #41482.
Signed-off-by: Berend Ozceri <berend@recogni.com>
Move z_priq_mq_add and z_priq_mq_remove into #ifdef CONFIG_SCHED_MULTIQ
block, because they are only used with that config.
Signed-off-by: Jeremy Bettis <jbettis@google.com>
Previous commit 55350a93e9, fixing
address-of-packed-mem warnings, uncovered an issue with
the alignment of dynamic kernel objects. On 64-bit platforms,
the alignment is 16 bytes instead of 4/8 bytes (as for a pointer,
void *). This changes the function that maps kernel object types
to alignments to use the dynamic object struct as the basis for
alignment instead of simply using pointers.
This also uncomments the assertion added in the previous commit
55350a93e9 so that we can keep
an eye on the alignment in the future. Note that the assertion
is moved after checking if the incoming kernel object is
dynamically allocated. Static kernel objects are not subject
to this alignment requirement.
Fixes #41062
Signed-off-by: Daniel Leung <daniel.leung@intel.com>
Applies the 'static' keyword to the following inlined routines:
z_priq_dumb_add()
z_priq_mq_add()
z_priq_mq_remove()
As those routines are only used in one place, they no longer have
externally visible declarations.
Signed-off-by: Peter Mitsis <peter.mitsis@intel.com>
Remove unused functions, or move them inside #ifdefs.
This allows using -Werror=unused-function on the clang compiler. Tested
by building the ChromeOS EC on all supported platforms with
-Werror=unused-function.
Signed-off-by: Jeremy Bettis <jbettis@google.com>
The warning below appears once -Waddress-of-packed-member is enabled:
/home/carles/src/zephyr/zephyr/kernel/userspace.c: In function
'unref_check':
/home/carles/src/zephyr/zephyr/kernel/userspace.c:471:28: warning:
converting a packed 'struct z_object' pointer (alignment 4) to a 'struct
dyn_obj' pointer (alignment 16) may result in an unaligned pointer value
[-Waddress-of-packed-member]
471 | CONTAINER_OF(ko, struct dyn_obj, kobj);
To avoid the warning, use an intermediate void * variable.
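The shape of the fix, as a sketch of the excerpt in question (not the
exact diff):
```
/* Route the cast through a void * so the compiler does not warn about
 * the alignment change away from the packed struct z_object:
 */
void *vdyn = CONTAINER_OF(ko, struct dyn_obj, kobj);
struct dyn_obj *dyn = vdyn;
```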
More info in #16587.
Signed-off-by: Carles Cufi <carles.cufi@nordicsemi.no>
The following warning is triggered by GCC when
-Waddress-of-packed-member is enabled:
/home/carles/src/zephyr/zephyr/kernel/mmu.c: In function
'free_page_frame_list_put':
/home/carles/src/zephyr/zephyr/kernel/mmu.c:383:42: warning: taking
address of packed member of 'struct z_page_frame' may result in an
unaligned pointer value [-Waddress-of-packed-member]
383 | sys_slist_append(&free_page_frame_list, &pf->node);
This is due to the fact that the sys_snode_t node is an unpacked
structure inside a packed z_page_frame structure, so the alignment of
the former cannot be ensured if placed inside the latter.
Given that alignment of z_page_frame is ensured by the code, silence the
compiler by going through an intermediate variable.
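As a sketch, the shape of that change at the call site:
```
/* Take the member's address into an intermediate variable rather than
 * passing it directly; alignment of z_page_frame is ensured elsewhere.
 */
sys_snode_t *node = &pf->node;

sys_slist_append(&free_page_frame_list, node);
```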
More info in #16587.
Signed-off-by: Carles Cufi <carles.cufi@nordicsemi.no>
Storing the state where this is the first GDB break can be done
in the main GDB stub code. There is no need to store the state
in the architecture layer.
Signed-off-by: Daniel Leung <daniel.leung@intel.com>
Remove the 'U' suffix to avoid changing the type of num_events,
and make the Z_SYSCALL_VERIFY macro check meaningful.
Fixes #40614
Signed-off-by: NingX Zhao <ningx.zhao@intel.com>
The virtual region bitmap bitarray struct is only used within
the source file, so it can be declared static.
Signed-off-by: Daniel Leung <daniel.leung@intel.com>
Adds an API to query and visit supported devices. Follows the example
set by the required devices API.
Implements #37793.
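A hedged usage sketch, assuming the same visitor-callback shape as the
required devices API:
```
static int print_supported(const struct device *dev, void *context)
{
        printk("supported: %s\n", dev->name);
        return 0;
}

void list_supported(const struct device *dev)
{
        /* Visit every device that depends on 'dev'. */
        (void)device_supported_foreach(dev, print_supported, NULL);
}
```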
Signed-off-by: Jordan Yates <jordan.yates@data61.csiro.au>
This updates k_mem_domain_add_thread() to return errors so
the application has a chance to recover.
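A caller can now recover, roughly like this sketch (dom and tid are
hypothetical):
```
void attach_thread(struct k_mem_domain *dom, k_tid_t tid)
{
        int ret = k_mem_domain_add_thread(dom, tid);

        if (ret != 0) {
                /* recover, e.g. leave the thread in its current domain */
                printk("k_mem_domain_add_thread failed (%d)\n", ret);
        }
}
```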
Signed-off-by: Daniel Leung <daniel.leung@intel.com>
This changes both k_mem_domain_add_partition() and
k_mem_domain_remove_partition() to return errors instead of
asserting when errors are encountered. This gives the application
a chance to recover.
The arch_mem_domain_partition_add()/_remove() will be modified
later together with all the other arch_mem_domain_*() changes
since the architecture code for partition addition and removal
functions usually cannot be separately changed.
Signed-off-by: Daniel Leung <daniel.leung@intel.com>
This changes k_mem_domain_init() to return error values
instead of asserting when errors are encountered.
This gives applications a chance to recover if needed.
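An error-checked sketch (part0 is a hypothetical partition):
```
extern struct k_mem_partition part0;  /* hypothetical partition */

void setup_domain(struct k_mem_domain *dom)
{
        struct k_mem_partition *parts[] = { &part0 };
        int ret = k_mem_domain_init(dom, ARRAY_SIZE(parts), parts);

        if (ret != 0) {
                /* recover instead of asserting */
                printk("k_mem_domain_init failed (%d)\n", ret);
        }
}
```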
Signed-off-by: Daniel Leung <daniel.leung@intel.com>
Remove the LOG_MINIMAL kconfig option, which was confusing
since LOG_MODE_MINIMAL existed. LOG_MINIMAL was used to
force minimal mode, but because of invalid dependencies
it was leading to issues.
Refactored the code to use LOG_MODE_MINIMAL everywhere and
renamed LOG_MINIMAL to LOG_DEFAULT_MINIMAL, which has an impact
on the default logging mode (which can still be changed later
in a conf file or in menuconfig).
Signed-off-by: Krzysztof Chruscinski <krzysztof.chruscinski@nordicsemi.no>
Added a heap reference parameter to the k_free tracing
hook to allow tracing of the pointer which was
passed as a parameter to a k_free call.
As part of this update, the defines
(for this hook) in the various tracing formats
were also updated.
With `gen_handles.py` now running on the first pre-built image,
`zephyr_pre0.elf`, there is no requirement for the device handle arrays
to remain the same size after processing.
Remove the padding generated in `gen_handles.py`, as well as the
temporary option `CONFIG_DEVICE_HANDLE_PADDING` which was added to work
around this issue.
Signed-off-by: Jordan Yates <jordan.yates@data61.csiro.au>
page_frame_dump() and z_page_frames_dump() are used for
debug printing, so there is no need to cover those functions.
__weak functions are also excluded, as every test overrides them.
It turns out that we have a sample (though not a test) that really
does want to use "k_thread_runtime_stats_all_get()" to measure system
uptime.
Instead of breaking this needlessly, separate the accounting for idle
and non-idle threads. The legacy API can report their sum, and the
more useful value is available via the kernel struct for future
analysis.
Signed-off-by: Andy Ross <andrew.j.ross@intel.com>
Clean up RUNTIME_STATS to separate the API from the individual data
backends. Use the SCHED_THREAD_USAGE tracking instead of the original
for execution_cycles. Move the kconfig for that into the runtime
stats menu, since it's part of the family now.
Also remove a lot of needless #if's around the declarations. Unused
structs and uncalled functions don't need to be explicitly hidden. An
attempt to access a non-existent field (e.g. "execution_cycles" if
that isn't configured) provides all the build time validation we need.
Signed-off-by: Andy Ross <andrew.j.ross@intel.com>
The runtime stats feature has always supported this, so use the same
kconfig to indirect the timing source in the same way.
(Personally I'm not a fan of the "timing" API, which really doesn't do
anything that the existing core "cycles" API does not except add a
bunch of code due to the separate implementation of frequency
management and conversion routines. It comes from an era where
"cycles" were fixed to a MHz frequency clock on platforms like x86 yet
we had benchmarks that wanted to use the TSC. Those days are behind
us and "cycles" can be fast everywhere.)
Signed-off-by: Andy Ross <andrew.j.ross@intel.com>
On older architectures, we don't have the
architecture-independent/scheduler-internal hooks (which require
USE_SWITCH) but there is a hook shared by the tracing layer we can use.
This is sort of a layering violation (stat tracking is a core feature,
tracing is supposed to be optional), but simple and lightweight. And
eventually it will go away as these architectures migrate.
Signed-off-by: Andy Ross <andrew.j.ross@intel.com>
This is an alternate backend that does what THREAD_RUNTIME_STATS is
doing currently, but with a few advantages:
* Correctly synchronized: you can't race against a running thread
(potentially on another CPU!) while querying its usage.
* Realtime results: you get the right answer always, up to timer
precision, even if a thread has been running for a while
uninterrupted and hasn't updated its total.
* Portable, no need for per-architecture code at all for the simple
case. (It leverages the USE_SWITCH layer to do this, so won't work
on older architectures)
* Faster/smaller: minimizes use of 64 bit math; lower overhead in
thread struct (keeps the scratch "started" time in the CPU struct
instead). One 64 bit counter per thread and a 32 bit scratch
register in the CPU struct.
* Standalone. It's a core (but optional) scheduler feature, no
dependence on para-kernel configuration like the tracing
infrastructure.
* More precise: allows architectures to optionally call a trivial
zero-argument/no-result cdecl function out of interrupt entry to
avoid accounting for ISR runtime in thread totals. No configuration
needed here, if it's called then you get proper ISR accounting, and
if not you don't.
For right now, pending unification, it's added side-by-side with the
older API and left as a z_*() internal symbol.
Signed-off-by: Andy Ross <andrew.j.ross@intel.com>
Add spinlock unlocking before calling the timer expiration
handler. Locking was introduced by commit dde3d6c.
Signed-off-by: Krzysztof Chruscinski <krzysztof.chruscinski@nordicsemi.no>
Instead of returning PM_STATE_ACTIVE when the CPU didn't enter a
low-power state, and a different state when it entered one but has
already left it and is active again, change pm_system_suspend to
return true when the CPU has entered a low-power state and false
otherwise.
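Callers in the idle path then read like this sketch:
```
void idle_enter(int32_t ticks)
{
        if (pm_system_suspend(ticks)) {
                /* the CPU entered (and has now left) a low-power state */
        } else {
                /* no low-power state was entered; normal idle instead */
                k_cpu_idle();
        }
}
```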
Signed-off-by: Flavio Ceolin <flavio.ceolin@intel.com>
There was a brief (but seen in practice on real apps on real
hardware!) race with the switch-based z_swap() implementation. The
thread return value was being initialized to -EAGAIN after the
enclosing lock had been released. But that lock is supposed to be
atomic with the thread suspend.
This opened a window for another racing thread to come by and "wake
up" our pending thread (which is fine on its own), set its return
value (e.g. to 0 for success) and then have that value clobbered by
the thread continuing to suspend itself outside the lock.
Melodramatic aside: I continue to hate this
arch_thread_return_value_set() API; it needs to die. At best it's a
mild optimization on a handful of architectures (e.g. x86 implements
it by writing to the EAX register save slot in the context block).
Asynchronous APIs are almost always worse than synchronous ones, and
in this case it's an async operation that races against literal
context switch code that can't use traditional locking strategies.
Fixes #39575
Signed-off-by: Andy Ross <andrew.j.ross@intel.com>
Threads may wait on an event object such that any events posted to
that event object may wake a waiting thread if the posting satisfies
the waiting threads' event conditions.
The configuration option CONFIG_EVENTS is used to control the inclusion
of events in a system as their use increases the size of
'struct k_thread'.
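A minimal usage sketch of the new API:
```
#include <zephyr.h>

K_EVENT_DEFINE(my_events);

void producer(void)
{
        /* wake any waiter whose condition includes bit 0 */
        k_event_post(&my_events, BIT(0));
}

void consumer(void)
{
        /* block until bit 0 or bit 1 is posted */
        uint32_t events = k_event_wait(&my_events, BIT(0) | BIT(1),
                                       false, K_FOREVER);
        ARG_UNUSED(events);
}
```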
Signed-off-by: Peter Mitsis <peter.mitsis@intel.com>
The k_work::flags field is not an atomic_t and would cause
-Wpointer-sign warning on some compilers. This function was the only
one in work.c to use atomic_get() so there is no benefit to atomicity.
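The shape of the change, with FLAG_BIT standing in for the actual flag
(a hypothetical name):
```
/* before: atomic_get() on a field that is not an atomic_t
 *     if (atomic_get(&work->flags) & FLAG_BIT) { ... }
 * after: a plain read; nothing here needed atomicity
 */
uint32_t flags = work->flags;

if ((flags & FLAG_BIT) != 0U) {
        /* ... */
}
```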
Signed-off-by: Chris Reed <chris.reed@arm.com>
In the case where the aligned memory range is on top of the allocated
memory range, freeing the zero-sized top unused memory will trigger
an assert in the virt_region_free() call, since vaddr could be equal
to Z_VIRT_REGION_END_ADDR.
Signed-off-by: Neil Armstrong <narmstrong@baylibre.com>
On ARM64 platforms, when mapping multiple memory zones with sizes
that are not a multiple of the L2 block size (2MiB), all subsequent
mappings will probably use L3 tables, and a huge mapping can consume
all available L3 tables.
In order to reduce the usage of L3 tables, this introduces a new
optional architecture-specific call, arch_virt_region_align(),
which can return a more optimal virtual address alignment than
the default MMU_PAGE_SIZE.
This alignment is used in virt_region_alloc() by:
- requesting more pages in virt_region_bitmap to make sure we request
up to the possible aligned virtual address
- freeing the supplementary pages used for alignment
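A hedged sketch of what the hook might look like; the exact signature
and the ARM64 policy shown are assumptions for illustration:
```
#define L2_BLOCK_SIZE (2UL * 1024 * 1024)  /* 2 MiB */

/* Signature is an assumption for illustration. */
uintptr_t arch_virt_region_align(uintptr_t phys, size_t size)
{
        /* prefer an L2 block boundary for large, block-aligned mappings */
        if (size >= L2_BLOCK_SIZE && (phys % L2_BLOCK_SIZE) == 0) {
                return L2_BLOCK_SIZE;
        }
        return MMU_PAGE_SIZE;  /* default */
}
```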
Suggested-by: Nicolas Pitre <npitre@baylibre.com>
Signed-off-by: Neil Armstrong <narmstrong@baylibre.com>
When CONFIG_USERSPACE is enabled, the ELF file from linker pass 1 is
used to create a hash table that identifies kernel objects by address.
We therefore can't allow the size of any object in the pass 2 ELF to
change in a way that would change those addresses, or we would create
a garbage hash table.
Simultaneously (and regardless of CONFIG_USERSPACE's value),
gen_handles.py must transform arrays of handles from their pass 1
values to their pass 2 values; see the file's docstring for more
details on that transformation.
The way this works is that gen_handles.py just pads out each pass 2
array so its length is the same as its pass 1 value. The padding value
is a repeated run of DEVICE_HANDLE_ENDS values. This value is the
terminator which we look for at runtime in places like
device_required_handles_get(), so there must be at least one, and we
error out in gen_handles.py if there's no room in the pass 2 array for
at least one such value. (If there is extra room, we just keep
inserting extra DEVICE_HANDLE_ENDS values to pad the array to its
original length.)
However, it is possible that a device has more direct dependencies in
the pass 2 handles array than its corresponding devicetree node had in
the pass 1 array. When this happens, users have no recourse, so that's
a potential showstopper.
To work around this possibility for now, add a new config option,
CONFIG_DEVICE_HANDLE_PADDING, whose value defaults to 0.
When nonzero, it is a count of padding handles that are inserted into
each device handles array. When gen_handles.py errors out due to lack
of room, its error message now tells the user how much to increase
CONFIG_DEVICE_HANDLE_PADDING by to work around the problem.
It looks like a real fix for this is to allocate kernel objects whose
addresses are required for hash tables in CONFIG_USERSPACE=y
configurations *before* the handle arrays. The handle arrays could
then be resized as needed in pass 2, which saves ROM by avoiding
unnecessary padding, and would avoid the need for
CONFIG_DEVICE_HANDLE_PADDING altogether.
However, this 'real fix' is not available and we are facing a deadline
to get a temporary solution in for Zephyr v2.7.0, so this is a good
enough workaround for now.
Signed-off-by: Martí Bolívar <marti.bolivar@nordicsemi.no>
This reverts commit b01e41ccdd.
It's not clear that the supported devices are being properly computed,
so let's revert this for v2.7.0 until we've had more time to think
it through.
Signed-off-by: Martí Bolívar <marti.bolivar@nordicsemi.no>
Before running a timer's timeout function, we need to make
sure that the threads waiting on this timer have been
added to the timer's wait queue, so use the timer lock to
mask interrupts in the z_timer_expiration_handler() function
to synchronize access to the timer's wait queue.
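A sketch of the resulting synchronization shape (not the exact diff):
```
static void z_timer_expiration_handler(struct _timeout *t)
{
        struct k_timer *timer = CONTAINER_OF(t, struct k_timer, timeout);
        k_spinlock_key_t key = k_spin_lock(&lock);

        /* ... waiters are guaranteed to be in timer->wait_q by now;
         * wake them and run the expiry function ...
         */

        k_spin_unlock(&lock, key);
}
```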
Signed-off-by: Chen Peng1 <peng1.chen@intel.com>
Some SMP applications have threading designs where every thread
created is always assigned to a specific CPU, and never want to
schedule them symmetrically across CPUs under any circumstance.
In this situation, it's possible to optimize the run queue design a
bit to put a separate queue in each CPU struct instead of having a
single global one. This is probably good for a few cycles per
scheduling event (maybe a bit more on architectures where cache
locality can be exploited) in circumstances where there is more than
one runnable thread. It's a mild optimization, but a basically simple
one.
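A hedged usage sketch; the Kconfig option names in the comment are
assumptions:
```
/* prj.conf (option names assumed):
 *   CONFIG_SCHED_CPU_MASK=y
 *   CONFIG_SCHED_CPU_MASK_PIN_ONLY=y
 */
void pin_and_start(k_tid_t thread)
{
        /* every thread is pinned to exactly one CPU before it starts */
        k_thread_cpu_mask_clear(thread);
        k_thread_cpu_mask_enable(thread, 1);  /* CPU 1 only */
        k_thread_start(thread);
}
```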
Signed-off-by: Andy Ross <andrew.j.ross@intel.com>
Split "init_ready_q()" into a separate function that operates on the
queue pointer and not the global kernel object. Pure refactoring.
Signed-off-by: Andy Ross <andrew.j.ross@intel.com>
Similar to the previous patch, the various _priq_run_*() functions are
always passed a first argument that is the singleton system run queue
(this is because the same backend functions are used by wait queues).
Refactor into a simpler API that places the access to the run queue in
just a single spot.
Signed-off-by: Andy Ross <andrew.j.ross@intel.com>
Pure refactoring. For historical reasons these two functions took a
first argument (a pointer to the run queue) that was always the same.
Eliminate it.
Signed-off-by: Andy Ross <andrew.j.ross@intel.com>
Add missing parentheses. Without them, wrong results
appeared when k_cycle_get_32() wrapped.
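An illustrative sketch of this bug class (not the exact macro):
```
/* Unparenthesized, a use like ELAPSED(now, last) / CYC_PER_TICK
 * expands to now - last / CYC_PER_TICK, which gives garbage once
 * k_cycle_get_32() wraps:
 */
#define ELAPSED_BAD(now, last)  now - last

/* Parenthesized, unsigned wraparound applies to the difference: */
#define ELAPSED(now, last)      ((now) - (last))
```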
Signed-off-by: Krzysztof Chruscinski <krzysztof.chruscinski@nordicsemi.no>
Add a bitarray to struct osThreadDef_t to indicate whether each
thread is in use. We can then get the first available thread
by searching this array when creating a new thread, and update
the array to mark a thread free when terminating one.
Signed-off-by: Chen Peng1 <peng1.chen@intel.com>
Cadence XCC is based on a very old gcc 4.2 compiler, which didn't
perfectly support C99 "inline" semantics with respect to
cross-translation-unit inline linkage (which Zephyr does not use;
our inlines are static only) and declaration order.
Fix the one spot where we were calling an inline before its
ALWAYS_INLINE definition, and add a flag to suppress the warning so
CIs trying to build with XCC and -Werror don't flip out.
Signed-off-by: Andy Ross <andrew.j.ross@intel.com>
Some architectures already return -ENOTSUP when these functions
are called, so add this return value to the API docs.
Signed-off-by: Daniel Leung <daniel.leung@intel.com>
Add a SOC API to allow for application control over deep idle power
states. Note that the hardware idle entry happens out of the WAITI
instruction, so the application has to be responsible for ensuring
that the CPU to be halted actually reaches idle deterministically.
Lots of warnings in the docs to this effect.
Signed-off-by: Andy Ross <andrew.j.ross@intel.com>
This commit removes the `timeout_q` from the `struct z_kernel` since it
is no longer used.
Note that the new kernel timeout implementation introduced in the
commit 987c0e5fc1 uses the `timeout_list` global variable in its place.
Signed-off-by: Stephanos Ioannidis <root@stephanos.io>
To support arm-ds / armlink, it is required that the weak main is
located in an object external to the object using the weak symbol.
If the weak symbol is inside the object referring to it, then the
weak symbol will be used, and this will result in
```
Error: L6200E: Symbol __ARM_use_no_argv multiply defined
(by init.o and main.o).
```
as both the weak and strong symbols are used.
Signed-off-by: Torsten Rasmussen <Torsten.Rasmussen@nordicsemi.no>
Cleanup and preparation commit for linker script generator.
Zephyr linker scripts provide start and end symbols for each section,
and sometimes even size and LMA start symbols.
Generally, start and end symbols use the following pattern:
Section name: foo
Section start symbol: __foo_start
Section end symbol: __foo_end
However, this pattern is not followed consistently.
To allow for linker script generation and to ensure consistent naming
of symbols, the following pattern is introduced consistently:
Section name: foo
Section start symbol: __foo_start
Section end symbol: __foo_end
Section size symbol: __foo_size
Section LMA start symbol: __foo_load_start
This commit aligns the __itcm and __dtcm_data load symbols with the
other symbols so that they follow the consistent pattern, which allows
for linker script and scatter file generation.
The symbols are named according to the section names they describe:
itcm and dtcm.
The following symbols are aligned in this commit:
- __itcm_rom_start -> __itcm_load_start
- __dtcm_data_rom_start -> __dtcm_data_load_start
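For illustration, how the consistently named symbols would be consumed
from C for a hypothetical section foo:
```
#include <string.h>

extern char __foo_start[];       /* VMA start */
extern char __foo_end[];         /* VMA end */
extern char __foo_load_start[];  /* LMA: where the section is loaded */

void copy_foo_to_ram(void)
{
        /* startup-style copy from load address to run address */
        memcpy(__foo_start, __foo_load_start,
               (size_t)(__foo_end - __foo_start));
}
```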
Signed-off-by: Torsten Rasmussen <Torsten.Rasmussen@nordicsemi.no>
Cleanup and preparation commit for linker script generator.
Zephyr linker scripts provide start and end symbols for each section,
and sometimes even size and LMA start symbols.
Generally, start and end symbols use the following pattern:
Section name: foo
Section start symbol: __foo_start
Section end symbol: __foo_end
However, this pattern is not followed consistently.
To allow for linker script generation and to ensure consistent naming
of symbols, the following pattern is introduced consistently:
Section name: foo
Section start symbol: __foo_start
Section end symbol: __foo_end
Section size symbol: __foo_size
Section LMA start symbol: __foo_load_start
This commit aligns the _ramfunc_ram/rom symbols with the other symbols
so that they follow the consistent pattern, which allows for linker
script and scatter file generation.
The symbols are named according to the section name they describe:
`ramfunc`.
The following symbols are aligned in this commit:
- _ramfunc_ram_start -> __ramfunc_start
- _ramfunc_ram_end -> __ramfunc_end
- _ramfunc_ram_size -> __ramfunc_size
- _ramfunc_rom_start -> __ramfunc_load_start
Signed-off-by: Torsten Rasmussen <Torsten.Rasmussen@nordicsemi.no>
Cleanup and preparation commit for linker script generator.
Zephyr linker scripts provide start and end symbols for each section,
and sometimes even size and LMA start symbols.
Generally, start and end symbols use the following pattern:
Section name: foo
Section start symbol: __foo_start
Section end symbol: __foo_end
However, this pattern is not followed consistently.
To allow for linker script generation and to ensure consistent naming
of symbols, the following pattern is introduced consistently:
Section name: foo
Section start symbol: __foo_start
Section end symbol: __foo_end
Section size symbol: __foo_size
Section LMA start symbol: __foo_load_start
This commit aligns the _data_ram/rom symbols with the other symbols so
that they follow the consistent pattern, which allows for linker script
and scatter file generation.
The symbols are named according to the section name they describe:
`data`.
A new group named data_region is introduced which spans all the
input and output sections that were previously covered by
__data_ram_start, __data_ram_end, and __data_rom_start.
The following symbols are aligned in this commit:
- __data_ram_start -> __data_region_start
- __data_ram_end -> __data_region_end
- __data_rom_start -> __data_region_load_start
The following new symbols are introduced so that the data section is
aligned with other sections:
- __data_start
- __data_end
__data_start has a value identical to __data_region_start, but
describes the start of the data section itself.
Signed-off-by: Torsten Rasmussen <Torsten.Rasmussen@nordicsemi.no>
The struct pm_device pm field found in the device state can be
statically initialized, without the need to do it at runtime.
Signed-off-by: Gerard Marull-Paretas <gerard.marull@nordicsemi.no>
With demand paging, the heap object and its backing memory
may not be in physical memory. So initialize those heaps
in the pinned region at PRE_KERNEL_1, and the remaining heaps
once the paging mechanism has been initialized.
Signed-off-by: Daniel Leung <daniel.leung@intel.com>
This adds the kconfig to allow reserving a number of page frames
which do not count towards free memory. This is to ensure that
there are enough page frames available for paging code and data.
Otherwise, it would be possible to exhaust all page frames via
anonymous memory mappings.
Signed-off-by: Daniel Leung <daniel.leung@intel.com>
This allows memory partitions to be put into the pinned
section so they are available during boot. For example,
the stack guard (in libc partition) is needed during boot
but before the paging mechanism is initialized. Without
pinning it in physical memory, it would fault early in the
boot process.
A new cmake property app_smem,pinned_partitions is
introduced so that additional partitions can be pinned
if needed.
Signed-off-by: Daniel Leung <daniel.leung@intel.com>
During the boot process, the boot sections need to be pinned in
memory to prevent them from being paged out (to avoid
pages being paged out and immediately paged in again).
Once the boot process is completed (just before calling main()),
the boot sections can be unpinned so the memory can be
used for demand paging for paging in data pages.
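A hedged sketch of the unpin step; the linker symbol names here are
assumptions:
```
#include <sys/mem_manage.h>

/* boot-section bounds as emitted by the linker script (names assumed) */
extern char lnkr_boot_start[];
extern char lnkr_boot_end[];

void unpin_boot_sections(void)
{
        /* just before main(): release boot pages for demand paging */
        k_mem_unpin(lnkr_boot_start,
                    (size_t)(lnkr_boot_end - lnkr_boot_start));
}
```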
Signed-off-by: Daniel Leung <daniel.leung@intel.com>
If the BSS section is not present in memory at boot, it will not
have been cleared, as the data pages are not in physical memory
and manipulating them would result in page faults.
In this scenario, zeroing BSS can only be done once the paging
mechanism has been initialized, so do it there.
Signed-off-by: Daniel Leung <daniel.leung@intel.com>
The code at the beginning of do_page_fault() pins the page
in memory if it is already present in physical memory;
if the page is not present, it proceeds to page it in and
then pin it. So the counting of page faults needs to be
moved after the pinning code so that it actually counts
page faults, and does not count pinning operations when
the page is already present.
Also clarify the comment on the goto statement, as it is
not correct.
Signed-off-by: Daniel Leung <daniel.leung@intel.com>
The z_main_stack is needed before the paging mechanism is initialized,
so put the stack into the pinned section to avoid page faults.
Signed-off-by: Daniel Leung <daniel.leung@intel.com>
In do_page_fault(), the incoming page fault address is not
aligned, and it was unconditionally assigned to the page
frame virtual address field. If the backing store simply
returns the virtual address without processing in
k_mem_paging_backing_store_location_get(), this unaligned
address will be passed to arch_mem_page_out(). On x86,
it is further passed to range_map() which asserts if
the physical address is not page aligned. So align
the address to page size before assigning it to the page
frame virtual address field.
Signed-off-by: Daniel Leung <daniel.leung@intel.com>
k_work_queue_start() receives a struct that is expected to be
uninitialized (zeroed); otherwise the behavior is undefined.
Following the Zephyr semantics, this PR introduces a new init
function for this struct.
Fixes #36865
Signed-off-by: Flavio Ceolin <flavio.ceolin@intel.com>
Adds an API to query and visit supported devices. Follows the example
set by the required devices API.
Implements #37793.
Signed-off-by: Jordan Yates <jordan.yates@data61.csiro.au>
This adds a very primitive logic to allow linking a prebuilt
static library of kernel code instead of building the kernel
from source. Note that the library is built with a specific
set of kconfigs, and they must match when building applications,
or else there would be mysterious crashes.
Signed-off-by: Daniel Leung <daniel.leung@intel.com>
While reading the code, I found some typos in the code comments
at lines 226 and 668.
Fix the comments to make them more solid.
Signed-off-by: Naiyuan Tian <naiyuan.tian@intel.com>
A code scanning tool raises a violation for the dereferencing
of the "work" pointer in the expression "work->handler".
As was discussed before in PR #35664, this is not a real issue.
Add an explanatory comment, because the static code analysis tool
raised a false-positive violation there.
Signed-off-by: Maksim Masalski <maksim.masalski@intel.com>
There are no references to either K_NUM_PRIORITIES or
K_NUM_PRIO_BITMAPS, the former having been dropped in:
1acd8c2996 kernel: Scheduler rewrite
Drop both of these, and also the two comments about extra priorities
taking extra RAM space, as those no longer seem to apply either.
Signed-off-by: Fabio Baltieri <fabiobaltieri@google.com>
This migrates all the current iterable section usages to the external
API, dropping the "Z_" prefix:
Z_ITERABLE_SECTION_ROM
Z_ITERABLE_SECTION_ROM_GC_ALLOWED
Z_ITERABLE_SECTION_RAM
Z_ITERABLE_SECTION_RAM_GC_ALLOWED
Z_STRUCT_SECTION_ITERABLE
Z_STRUCT_SECTION_ITERABLE_ALTERNATE
Z_STRUCT_SECTION_FOREACH
Signed-off-by: Fabio Baltieri <fabiobaltieri@google.com>
Introduce a new API to allow devices capable of waking up the system
to register themselves as wake-up sources. This permits applications
to select the most appropriate way to wake up the system when it is
suspended.
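A usage sketch from the application side, assuming the
pm_device_wakeup_* helper shape introduced alongside this API:
```
void configure_wakeup_source(struct device *dev)
{
        if (pm_device_wakeup_is_capable(dev)) {
                /* ask for this device to wake the system from suspend */
                (void)pm_device_wakeup_enable(dev, true);
        }
}
```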
Signed-off-by: Flavio Ceolin <flavio.ceolin@intel.com>
The constructors of static objects are stored in the ".ctors"
section. With the MWDT toolchain, the ".ctors" section format is
incompatible with the GNU toolchain, so let's use the initialization
code provided by MWDT instead of the Zephyr one when the MWDT
toolchain is used.
As is done for the GNU toolchain, we call constructors of
static objects but we don't call destructors for them.
Signed-off-by: Eugeniy Paltsev <Eugeniy.Paltsev@synopsys.com>
Signed-off-by: Evgeniy Paltsev <PaltsevEvgeniy@gmail.com>
The device PM subsystem already holds the device state, so there is no
need to keep duplicates inside the device. pm_device_state_get() has
been refactored to just return the device state. Note that this is still
not safe, but the same applied to the previous implementation. This
problem will be addressed later.
Signed-off-by: Gerard Marull-Paretas <gerard.marull@nordicsemi.no>