Use 'thread' wherever it makes sense; using 't' in some places can be
confused with the 't' used for timeouts, for example.
Signed-off-by: Anas Nashif <anas.nashif@intel.com>
Move thread monitor related functions, which are not enabled in most
configurations, out of thread.c and clean up headers.
Signed-off-by: Anas Nashif <anas.nashif@intel.com>
Move this out of thread.c and put it directly in init.c, where it is
being used. Also remove the declaration from kernel.h; this is an
internal function and should not be in a public header.
Signed-off-by: Anas Nashif <anas.nashif@intel.com>
The functions to manipulate the essential flag indeed operate on
threads, but they are misplaced in the thread implementation file. Put
them alongside the other routines that set thread flags, and clean up
headers a bit.
Signed-off-by: Anas Nashif <anas.nashif@intel.com>
Move spin_lock validation out of thread.c into its own file. This code
really has nothing to do with the thread code.
Signed-off-by: Anas Nashif <anas.nashif@intel.com>
When dynamic thread stack allocation is enabled, the stack may come
from the heap in coherent memory; trying to use cached memory is
overcomplicated because of heap metadata and cache line sizes.
Also, when userspace is enabled, stacks have to be page aligned and the
address of the stack is used to track kernel objects.
Signed-off-by: Flavio Ceolin <flavio.ceolin@intel.com>
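For illustration, a minimal sketch of putting a thread on a dynamically
allocated stack, assuming the k_thread_stack_alloc() API enabled by
CONFIG_DYNAMIC_THREAD (sizes and priority here are arbitrary):

    #include <zephyr/kernel.h>

    #define MY_STACK_SIZE 1024

    static struct k_thread worker_thread;

    static void worker(void *p1, void *p2, void *p3)
    {
        /* ... */
    }

    void start_worker(void)
    {
        /* The kernel handles the alignment and object-tracking
         * constraints described above.
         */
        k_thread_stack_t *stack = k_thread_stack_alloc(MY_STACK_SIZE, 0);

        if (stack == NULL) {
            return;
        }

        k_thread_create(&worker_thread, stack, MY_STACK_SIZE,
                        worker, NULL, NULL, NULL,
                        K_PRIO_PREEMPT(5), 0, K_NO_WAIT);
    }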
Traditionally, k_thread_create() has required that the application
size the stack correctly. Zephyr doesn't detect or return errors and
treats stack overflow as an application bug (though obviously some
architectures have runtime features to trap on overflows).
At this one spot though, it's possible for the kernel to adjust the
stack for K_THREAD_STACK_RESERVED in such a way that the arch layer's
own stack initialization overflows. That failure can be seen by
static analysis, so we can't just sweep it under the rug as an
application failure.
Unfortunately there aren't any good options for handling it here (no
way to return failure, can't be a build assert as the size is a
runtime argument). A panic will have to do.
Fixes: #67106
Fixes: #65584
Signed-off-by: Andy Ross <andyross@google.com>
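The shape of the check is roughly this (hypothetical variable names;
only a sketch of the real arch-layer logic):

    /* After accounting for K_THREAD_STACK_RESERVED and arch alignment,
     * the initial stack pointer must still lie within the buffer.
     * There is no way to return an error here, so panic instead of
     * silently overflowing.
     */
    if (stack_ptr < stack_buf_start) {
        k_panic();
    }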
Export some symbols for loadable modules. Also add an
EXPORT_SYSCALL() helper macro for exporting system calls by their
official names.
Signed-off-by: Guennadi Liakhovetski <guennadi.liakhovetski@linux.intel.com>
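Usage looks roughly like this (k_sleep chosen arbitrarily; syscalls are
otherwise only visible under their internal z_impl_ names):

    #include <zephyr/llext/symbol.h>

    /* Export an ordinary symbol for loadable modules. */
    EXPORT_SYMBOL(my_driver_init);   /* hypothetical function */

    /* Export a system call under its official name. */
    EXPORT_SYSCALL(k_sleep);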
The halt queue will be used to identify threads that are waiting
for a thread on another CPU to finish suspending.
Signed-off-by: Peter Mitsis <peter.mitsis@intel.com>
Move the syscall_handler.h header, which is used internally only, to a
dedicated internal folder that should not be used outside of Zephyr.
Signed-off-by: Anas Nashif <anas.nashif@intel.com>
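In-tree users of the header change their include accordingly:

    -#include <zephyr/syscall_handler.h>
    +#include <zephyr/internal/syscall_handler.h>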
k_thread_name_get has an inconsistent signature. The function
declaration uses k_tid_t, but the implementation uses
struct k_thread *. Change the implementation to use k_tid_t.
Signed-off-by: Flavio Ceolin <flavio.ceolin@intel.com>
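For reference, the consistent signature (k_tid_t is a typedef for
struct k_thread *):

    const char *k_thread_name_get(k_tid_t thread_id);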
Rename z_early_boot_rand_get to z_early_rand_get to be consistent with
other early functions.
Signed-off-by: Flavio Ceolin <flavio.ceolin@intel.com>
Platforms that determine their basic timer frequency at runtime instead of
build time cannot compute thread initialization timeouts during
compilation.
Switch back to storing the init_delay value in milliseconds and perform the
conversion to a k_timeout_t at runtime.
Signed-off-by: Keith Packard <keithp@keithp.com>
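A sketch of the runtime conversion (assuming the stored field is named
init_delay_ms):

    /* The conversion happens when the thread is started, once the
     * timer frequency is known.
     */
    k_timeout_t delay = SYS_TIMEOUT_MS(thread_data->init_delay_ms);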
rand32.h does not make much sense as a name, since the random subsystem
provides more APIs than just getting a random 32-bit value. Rename it
to random.h to be consistent with other subsystems.
Signed-off-by: Flavio Ceolin <flavio.ceolin@intel.com>
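Users change their include accordingly:

    -#include <zephyr/random/rand32.h>
    +#include <zephyr/random/random.h>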
Integrates the object core statistics framework into the following
kernel objects:
sys_mem_blocks, k_mem_slab
threads, _cpu, z_kernel
Signed-off-by: Peter Mitsis <peter.mitsis@intel.com>
Storing this value in milliseconds rather than using k_timeout_t
requires the system to perform division at runtime to convert types.
This pulls in the 64-bit soft division code on platforms without
hardware support for it.
Perform the conversion at build time instead by storing the k_timeout_t
value directly.
The init_delay field was moved within the _static_thread_data structure to
avoid introducing a hole for alignment on 32-bit systems when using 64-bit
timeouts.
Use SYS_TIMEOUT_MS instead of K_MSEC so that the initial delay can be set
to forever.
Signed-off-by: Keith Packard <keithp@keithp.com>
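SYS_TIMEOUT_MS() maps SYS_FOREVER_MS to K_FOREVER and everything else
through K_MSEC(), so the static initializer is a build-time constant
(delay_ms here stands for the millisecond argument of the static
thread macro):

    .init_delay = SYS_TIMEOUT_MS(delay_ms),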
This header does not expose any public APIs, so move it under
kernel/include and update the files that include it.
Signed-off-by: Anas Nashif <anas.nashif@intel.com>
Until now, the iterable sections APIs have been part of the toolchain
(common) headers. They are not strictly related to a toolchain; they
just rely on the linker providing support for sections. Most files
relied on indirect includes to access the API; now it is included as
needed.
Signed-off-by: Gerard Marull-Paretas <gerard.marull@nordicsemi.no>
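Files using the API now include it explicitly, e.g.:

    #include <zephyr/sys/iterable_sections.h>

    /* my_entry is a hypothetical struct whose instances are placed in
     * an iterable section with STRUCT_SECTION_ITERABLE(my_entry, ...).
     */
    STRUCT_SECTION_FOREACH(my_entry, entry) {
        /* visit each entry */
    }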
Updates events to prevent a timeout from corrupting the list of
threads that need to be woken up.
Signed-off-by: Aastha Grover <aastha.grover@intel.com>
Change for loops of the form:

    for (i = 0; i < CONFIG_MP_NUM_CPUS; i++)
            ...

to:

    unsigned int num_cpus = arch_num_cpus();

    for (i = 0; i < num_cpus; i++)
            ...

We make the call outside of the for loop so that it only happens once,
rather than on every iteration.
Signed-off-by: Kumar Gala <kumar.gala@intel.com>
The dummy thread doesn't include a TLS area, so any thread local
variables will fail to work if used in the switched_out tracing hook.
Skip the hook in this case, as it's not really accurate anyway; the
dummy thread is only used to set context for the initial switch on each
core.
Signed-off-by: Keith Packard <keithp@keithp.com>
Using char pointers with %p should be avoided in log messages. It will
cause issues in configurations where logging strings are removed from
the binary and are not inspected when cbprintf packages are built from
logging strings. In that case, any char pointers are treated as strings
and copied into the package body.
Signed-off-by: Krzysztof Chruscinski <krzysztof.chruscinski@nordicsemi.no>
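The safe pattern is to cast such pointers to void * before logging
them:

    /* Without the cast, the package code may treat the char * as a
     * C string and copy its contents into the package body.
     */
    LOG_DBG("stack %p", (void *)stack_buf);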
The function k_thread_runtime_stats_all_get() now populates the
current_cycles field in the thread runtime stats structure.
Resets the number of cycles in the CPU's current usage window once
the idle thread is scheduled.
Fixes the average_cycles calculation.
Signed-off-by: Peter Mitsis <peter.mitsis@intel.com>
In order to bring consistency in-tree, migrate all kernel code to the
new prefix <zephyr/...>. Note that the conversion has been scripted,
refer to zephyrproject-rtos#45388 for more details.
Signed-off-by: Gerard Marull-Paretas <gerard.marull@nordicsemi.no>
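For example:

    -#include <kernel.h>
    +#include <zephyr/kernel.h>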
When threads are in more than one state at a time, k_thread_state_str()
returns a string that lists each of its states delimited by a '+'.
This in turn necessitates a change to the API that includes both a
pointer to the buffer to use for the string and the size of the buffer.
Signed-off-by: Peter Mitsis <peter.mitsis@intel.com>
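A usage sketch of the new API (buffer size chosen arbitrarily):

    char buf[32];

    /* e.g. "queued+suspended" for a thread in two states at once */
    const char *state = k_thread_state_str(tid, buf, sizeof(buf));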
Zephyr's timeslice implementation has always been somewhat primitive.
You get a global timeslice that applies broadly to the whole bottom of
the priority space, with no ability (beyond that one priority
threshold) to tune it to work on certain threads, etc...
This adds an (optionally configurable) API that allows timeslicing to
be controlled on a per-thread basis: any thread at any priority can be
set to timeslice, for a configurable per-thread slice time, and at the
end of its slice a callback can be provided that can take action.
This allows the application to implement things like responsiveness
heuristics, "fair" scheduling algorithms, etc... without requiring
that facility in the core kernel.
Signed-off-by: Andy Ross <andrew.j.ross@intel.com>
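A sketch of the per-thread API (guarded by CONFIG_TIMESLICE_PER_THREAD;
the slice length and callback behavior are illustrative):

    static void slice_expired(struct k_thread *thread, void *data)
    {
        /* e.g. deprioritize the thread, update fairness accounting */
    }

    /* Give one thread a 300-tick slice with an end-of-slice callback,
     * independent of the global timeslice settings.
     */
    k_thread_time_slice_set(&my_thread, 300, slice_expired, NULL);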
The x86 and xtensa implementations of irq_offload() invoke synchronous
interrupts on the local CPU, and are therefore safe to use from within
an interrupt context. This is a cheap and portable way to exercise
nested interrupts, which are otherwise highly platform-dependent to
test. Add a kconfig to signal the capability.
Signed-off-by: Andy Ross <andrew.j.ross@intel.com>
Extract the stack usage calculation from k_thread_stack_space_get into
z_stack_space_get so it can also be used for the interrupt stack.
Signed-off-by: Krzysztof Chruscinski <krzysztof.chruscinski@nordicsemi.no>
Extends the CPU usage runtime stats to track current, total, peak
and average usage (as bounded by the scheduling of the idle thread).
This permits a developer to obtain more system information if desired
to tune the system.
Signed-off-by: Peter Mitsis <peter.mitsis@intel.com>
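A sketch of reading the extended stats (field availability depends on
the usage-analysis kconfig):

    k_thread_runtime_stats_t stats;

    k_thread_runtime_stats_get(tid, &stats);

    /* usage within the window bounded by idle-thread scheduling */
    printk("cur=%llu peak=%llu avg=%llu\n",
           stats.current_cycles, stats.peak_cycles,
           stats.average_cycles);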
This commit does two things to z_sched_thread_usage(). First,
it updates the API so that it accepts a pointer to the runtime
stats instead of simply returning the usage cycles. This gives it
the flexibility to retrieve additional statistics in the future.
Second, the runtime stats are only updated if the specified thread
is the current thread running on the current core.
Signed-off-by: Peter Mitsis <peter.mitsis@intel.com>
It turns out that we have a sample (though not a test) that really
does want to use "k_thread_runtime_stats_all_get()" to measure system
uptime.
Instead of breaking this needlessly, separate the accounting for idle
and non-idle threads. The legacy API can report their sum, and the
more useful value is available via the kernel struct for future
analysis.
Signed-off-by: Andy Ross <andrew.j.ross@intel.com>
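So uptime-style measurements keep working, e.g.:

    k_thread_runtime_stats_t stats;

    k_thread_runtime_stats_all_get(&stats);

    /* execution_cycles reports the sum of idle and non-idle cycles */
    uint64_t total = stats.execution_cycles;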
Clean up RUNTIME_STATS to separate the API from the individual data
backends. Use the SCHED_THREAD_USAGE tracking instead of the original
for execution_cycles. Move the kconfig for that into the runtime
stats menu, since it's part of the family now.
Also remove a lot of needless #if's around the declarations. Unused
structs and uncalled functions don't need to be explicitly hidden. An
attempt to access a non-existent field (e.g. "execution_cycles" if
that isn't configured) provides all the build time validation we need.
Signed-off-by: Andy Ross <andrew.j.ross@intel.com>
On older architectures, we don't have the
architecture-independent/scheduler-internal hooks (which require
USE_SWITCH) but there is a hook shared by the tracing layer we can use.
This is sort of a layering violation (stat tracking is a core feature,
tracing is supposed to be optional), but simple and lightweight. And
eventually it will go away as these architectures migrate.
Signed-off-by: Andy Ross <andrew.j.ross@intel.com>