We don't actually need spinlocks here.
For user_copy(), we are checking that the pointer/size passed in
from user mode represents an area that the thread can read or
write to. Then we do a memcpy into the kernel-side buffer, which
is used from then on. It's OK if another thread scribbles on the
buffer contents during the copy, as we have not yet begun any
examination of its contents.
For the z_user_string*_copy() functions, it's also possible
that another thread could scribble on the string contents,
but we do no analysis of the string other than to establish
a length. We just need to ensure that when these functions
exit, the copied string is NULL terminated.
For SMP, the spinlocks are removed as they will not prevent a
thread running on another CPU from changing the buffer/string
contents; we just need to safely deal with that possibility.
For UP, the locks do prevent another thread from stepping
in, but it's better to just safely deal with it rather than
affect the interrupt latency of the system.
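In sketch form (a hedged illustration of the pattern; the helper
name and the use of z_arch_buffer_validate() here are illustrative,
not the exact kernel code):

    static int user_string_copy(char *dst, const char *src, size_t maxlen)
    {
        /* Validate that the user-supplied region is readable. Another
         * thread may scribble on src during the memcpy; that is fine,
         * since only the kernel-side copy is examined afterwards. */
        if (z_arch_buffer_validate((void *)src, maxlen, 0) != 0) {
            return -EFAULT;
        }
        memcpy(dst, src, maxlen);
        /* Guarantee termination regardless of what user mode did. */
        dst[maxlen - 1] = '\0';
        return 0;
    }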
Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
The struct _caller_saved is not used. Most architectures
automatically push these registers onto the stack; in the other
architectures the exception code does it.
Signed-off-by: Flavio Ceolin <flavio.ceolin@intel.com>
The struct _kernel_ach exists only because ARC's port needed it; in
all other ports it was defined as an empty struct. It turns out that
this struct is not required even for ARC anymore; it is legacy code
from nanokernel times.
Signed-off-by: Flavio Ceolin <flavio.ceolin@intel.com>
Revert commit 3e255e968, which adjusted the stack size on the
qemu_x86 platform for coverage tests but broke other platforms'
CI tests.
Fixes: #15379.
Signed-off-by: Wentong Wu <wentong.wu@intel.com>
For SDK 0.10.0, more stack is consumed when coverage is enabled,
so adjust the stack size to fix a stack overflow issue.
Fixes: #15206.
Signed-off-by: Wentong Wu <wentong.wu@intel.com>
Update the files which contain no license information with the
'Apache-2.0' SPDX license identifier. Many source files in the tree are
missing licensing information, which makes it harder for compliance
tools to determine the correct license.
By default, all files without license information are under the
default license of Zephyr, which is Apache 2.0.
Signed-off-by: Anas Nashif <anas.nashif@intel.com>
If a test tries to create a user thread, and the platform supports
user mode, and CONFIG_TEST_USERSPACE has not been enabled, fail an
assertion.
Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
This macro is slated for complete removal, as it's not possible
on arches with an MPU stack guard to know the true buffer bounds
without also knowing the runtime state of its associated thread.
As removing this completely would be invasive to where we are
in the 1.14 release, demote to a private kernel Z_ API instead.
The current way that the macro is being used internally will
not cause any undue harm, we just don't want any external code
depending on it.
The final work to remove this (and overhaul stack specification in
general) will take place in 1.15 in the context of #14269.
Fixes: #14766
Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
For SDK 0.10.0, more stack is consumed when coverage is enabled on
the qemu_x86 and mps2_an385 platforms; adjust the stack size for
most of the test cases, otherwise there will be stack overflows.
Fixes: #14500.
Signed-off-by: Wentong Wu <wentong.wu@intel.com>
Rename reserved function names in arch/ subdirectory. The Python
script gen_priv_stacks.py was updated to follow the 'z_' prefix
naming.
Signed-off-by: Patrik Flykt <patrik.flykt@intel.com>
For systems without userspace enabled, these work the same
as a k_mutex.
For systems with userspace, the sys_mutex may exist in user
memory. It is still tracked as a kernel object, but has an
underlying k_mutex that is looked up in the kernel object
table.
Future enhancements will optimize sys_mutex to not require
syscalls for uncontended sys_mutexes, using atomic ops
instead.
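Usage is the same either way; a minimal sketch (headers and error
handling elided):

    SYS_MUTEX_DEFINE(my_mutex);   /* may live in user memory */

    void worker(void)
    {
        if (sys_mutex_lock(&my_mutex, K_FOREVER) == 0) {
            /* ... critical section ... */
            sys_mutex_unlock(&my_mutex);
        }
    }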
Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
Checking the stack sentinel may abort the current thread, so make
this check before we determine what the next thread to run is.
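Roughly (with illustrative names), the reordering is:

    /* check first: this may abort _current, which would invalidate
     * any next-thread decision made beforehand */
    z_check_stack_sentinel();
    next = z_get_next_ready_thread();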
Fixes: #15037
Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
There is some remaining code from object monitoring which simply
expands to empty loop macros. Remove it, as it is not functional
anyway.
Signed-off-by: Daniel Leung <daniel.leung@intel.com>
Clarify the warning in the help for CONFIG_MULTITHREADING to make it
clear that many things will break if this is set to 'n'.
Signed-off-by: David Brown <david.brown@linaro.org>
The controlling expression of if and iteration statements must have
a boolean type.
MISRA-C rule 14.4
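For example (a generic illustration of the class of change):

    /* non-compliant: the controlling expression has pointer type */
    if (ptr) {
        do_work();
    }

    /* compliant: the controlling expression is essentially boolean */
    if (ptr != NULL) {
        do_work();
    }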
Signed-off-by: Flavio Ceolin <flavio.ceolin@intel.com>
Functions z_spin_lock_valid and z_spin_unlock_valid are essentially
boolean functions, just change their signature to return a bool instead
of an integer.
MISRA-C rule 10.1
Signed-off-by: Flavio Ceolin <flavio.ceolin@intel.com>
k_busy_wait() does not work when multithreading is disabled, so do not
try to wait during boot.
Fixes #14454
Signed-off-by: Anas Nashif <anas.nashif@intel.com>
This function was returning an essentially boolean value. Just changing
the signature to return a bool.
MISRA-C rule 14.4
Signed-off-by: Flavio Ceolin <flavio.ceolin@intel.com>
The BIT macro uses an unsigned int, avoiding implementation-defined
behavior when shifting signed types.
MISRA-C rule 10.1
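The shape of the change (before/after shown for illustration):

    /* before: shifts a signed int, which is implementation-defined
     * or undefined for some operands */
    #define BIT(n)  (1 << (n))

    /* after: shift an unsigned value instead */
    #define BIT(n)  (1UL << (n))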
Signed-off-by: Flavio Ceolin <flavio.ceolin@intel.com>
Add SYS_POWER_ prefix to HAS_STATE_SLEEP_, HAS_STATE_DEEP_SLEEP_
options to align them with names of power states they control.
Following is a detailed list of string replacements used:
s/HAS_STATE_SLEEP_(\d)/HAS_SYS_POWER_STATE_SLEEP_$1/
s/HAS_STATE_DEEP_SLEEP_(\d)/HAS_SYS_POWER_STATE_DEEP_SLEEP_$1/
Signed-off-by: Piotr Mienkowski <piotr.mienkowski@gmail.com>
This commit cleans up the names of system power management functions
by ensuring that:
- all functions start with 'sys_pm_' prefix
- API functions which should not be exposed to the user start with '_'
- name of the function hints at its purpose
Signed-off-by: Piotr Mienkowski <piotr.mienkowski@gmail.com>
There exist SoCs, e.g. STM32L4, where one of the low power modes
reduces CPU frequency and supply voltage but does not stop the CPU.
Such power modes are currently not supported by Zephyr.
To facilitate adding support for such a class of power modes in the
future, and to ensure the naming convention makes it clear that the
currently supported power modes stop the CPU, this commit renames
Low Power States to Sleep States and updates the documentation.
Signed-off-by: Piotr Mienkowski <piotr.mienkowski@gmail.com>
On SMP, there was a bug where the logic that re-adds _current to the
run queue at swap time would accidentally reschedule threads that had
just gone to sleep, because the is_thread_prevented_from_running()
predicate only tests for threads that are "suspended" or "pending" and
not sleeping.
Overload _THREAD_SUSPENDED to indicate "sleeping" also. Simple fix
for an immediate bug, though long term we really want to unify all the
blocked conditions to prevent this kind of state bug.
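A hedged sketch of the predicate after this change (flag names per
the kernel headers of this era; the exact bit set may differ):

    static inline bool z_is_thread_prevented_from_running(struct k_thread *t)
    {
        u8_t state = t->base.thread_state;

        /* _THREAD_SUSPENDED now also covers "sleeping" */
        return (state & (_THREAD_PENDING | _THREAD_PRESTART |
                         _THREAD_DEAD | _THREAD_DUMMY |
                         _THREAD_SUSPENDED)) != 0U;
    }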
Signed-off-by: Andy Ross <andrew.j.ross@intel.com>
This is throwing errors in static analysis, complaining that
comparing whether a priority is both higher and lower is impossible.
That is wrong to my eyes (I swear I think it might be cueing off the
names of the functions, which invert "higher" and "lower" to match
our reversed priority numbers).
But frankly this was never a very readable macro to begin with.
Refactor to put the bounds into the term, so the static analyzer can
prove it locally, and add a build assertion to catch any errors (there
are none currently) where the low<->high priority range is invalid.
Long term, we should probably remove this macro, it doesn't provide
much value. But removing it in response to a static analysis failure
is... not very responsible as a development practice.
Fixes #14816
Fixes #14820
Signed-off-by: Andy Ross <andrew.j.ross@intel.com>
The pointer arithmetic used didn't account for ARC
supervisor mode stacks, which are allocated at the
end of the stack object. Use the new macro to know
exactly how much space is reserved.
Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
Daniel Leung caught a good one: In the (SMP) case where we were
aborting a thread that was not currently scheduled, we were flagging
the DEAD state on _current and not the thread we were aborting! This
wasn't as fatal as it seemed, as the thread that called
z_sched_abort() would effectively go on living (as a zombie?) in a
state where it would always be preempted, but would otherwise remain
schedulable.
Signed-off-by: Andy Ross <andrew.j.ross@intel.com>
Most CPUs have instructions like LOCK or LDREX/STREX which allow
for atomic operations without locking interrupts, and which can be
invoked from user mode without complication. These are typically
implemented with compiler builtin atomic operations or custom
assembly.
However, some CPUs may lack these kinds of instructions, such as
Cortex-M0 or some ARC variants. They use these C-based atomic
operation implementations instead. Unfortunately, these require
grabbing a spinlock to ensure proper concurrency with other threads
and ISRs, and hence will trigger an exception when called from user
mode.
For these platforms, which support user mode but not atomic
operation instructions, the atomic API has been exposed as
system calls.
Some of the implementations in atomic_c.c which can be instead
expressed in terms of other atomic operations have been removed.
The kernel test of atomic operations now runs in user mode to
prove that this works.
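The C implementations follow this pattern (a simplified sketch of
the approach, not the verbatim atomic_c.c code):

    static struct k_spinlock lock;

    atomic_val_t atomic_add(atomic_t *target, atomic_val_t value)
    {
        k_spinlock_key_t key = k_spin_lock(&lock);
        atomic_val_t ret = *target;

        *target += value;
        k_spin_unlock(&lock, key);

        return ret;  /* the value before the addition */
    }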
Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
The workaround for nonatomic swap had yet another edge case: it would
save off the _current pointer when pending a thread so that the next
time slice interrupt could test it to see if the swap had actually
happened before assuming that _current could be rescheduled (if it
just pended itself, that's impossible). Then it would clear the
pending_current pointer so future interrupts wouldn't be confused.
BUT: it turns out that qemu, when faced with really rapid timer rates
that exceed its (host-based) timing accuracy, is perfectly willing to
"stack up" timer interrupts such that one goes pending before the
previous one has finished executing. In that case, we can enter the
SECOND timer interrupt, to try timeslicing a SECOND time, STILL before
the PendSV exception has run to actually effect the context switch.
Except this time pending_current has been cleared and we try to
reschedule the pended _current thread incorrectly. In theory real
hardware could do this too, though it would involve absolutely crazy
interrupt latency problems.
Work around this by moving the clear to the thread itself:
immediately after it wakes up from the pend call, it retakes a lock
and clears pending_current if it still matches _current. That is not
a perfect
fix: there remains a 2-3 instruction race at that moment where we
return from pend and before we can lock interrupts again where a timer
interrupt will see an incorrect pointer. But I hammered at this and
couldn't make qemu do that (i.e. return from a timer interrupt but
flag a new one in just a cycle or two).
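In sketch form (illustrative names; the real code lives in the arch
swap path):

    /* in the thread, immediately after returning from the pend: */
    key = k_spin_lock(&sched_spinlock);
    if (pending_current == _current) {
        pending_current = NULL;   /* the swap really did happen */
    }
    k_spin_unlock(&sched_spinlock, key);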
Signed-off-by: Andy Ross <andrew.j.ross@intel.com>
The existing device_set_power_state() API works only in synchronous
mode, and this is not desirable for devices (e.g. a gyro) which take
a longer time (a few hundred ms) to suspend/resume.
To support async mode, a new callback argument is added to the API.
The device drivers can asynchronously suspend/resume and call the
callback function upon completion of the async request.
This commit adds the missing callback parameter to all the drivers
to make it compliant with the new API.
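The resulting call shape is roughly (a hedged sketch; state names
follow the device PM API of this era):

    typedef void (*device_pm_cb)(struct device *dev, int status,
                                 void *context, void *arg);

    /* async: returns immediately, cb fires when the transition completes */
    device_set_power_state(dev, DEVICE_PM_SUSPEND_STATE, my_cb, my_arg);

    /* sync: pass NULL for the callback, preserving the old behavior */
    device_set_power_state(dev, DEVICE_PM_ACTIVE_STATE, NULL, NULL);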
Signed-off-by: Ramakrishna Pallala <ramakrishna.pallala@intel.com>
Corrections in the documentation of arguments in
Z_SYSCALL_MEMORY, Z_SYSCALL_MEMORY_READ, and
Z_SYSCALL_MEMORY_WRITE macros.
Signed-off-by: Ioannis Glaropoulos <Ioannis.Glaropoulos@nordicsemi.no>
This commit enhances the documentation of z_arch_buffer_validate
describing the cases where the validation is performed
successfully, as well as the cases where the result is
undefined.
Signed-off-by: Ioannis Glaropoulos <Ioannis.Glaropoulos@nordicsemi.no>
This was always doing a remove/add of the _current thread to the run
queue, which is wrong because in SMP _current isn't in the queue to
remove. But it went undetected until the recent dlist changes.
Signed-off-by: Andy Ross <andrew.j.ross@intel.com>
In SMP, we are setting the _current pointer while holding the
scheduler spinlock locally, which means that when we try to release it
the validation layer (not the spinlock per se) will scream at us
because the thread that took the lock doesn't match the one releasing
it.
Special case this when validation is enabled.
Signed-off-by: Andy Ross <andrew.j.ross@intel.com>
The k_spin_lock() validation was setting the new owner of the spinlock
BEFORE the actual lock was taken, so it could race against other
processors trying the same thing. Split the modification step out
into a separate function that can be called after we affirmatively
have the lock.
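After the split, the lock path looks roughly like this (simplified;
the validation bodies are elided):

    void k_spin_lock(struct k_spinlock *l)
    {
        __ASSERT(z_spin_lock_valid(l), "recursive spinlock");

        while (!atomic_cas(&l->locked, 0, 1)) {
            /* spin */
        }

        /* record ownership only once we affirmatively hold the lock */
        z_spin_lock_set_owner(l);
    }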
Signed-off-by: Andy Ross <andrew.j.ross@intel.com>
The tracing fixes in commit e87193896a ("subsys: debug: tracing: Fix
thread tracing") were... not a readability win. The point appears to
have been to put a tracing hook immediately before and after the
assignment to the _current pointer. So do that in an abstracted
function and clean up _get_next_switch_handle() (which is a subtle and
important function already polluted with some unavoidable preprocessor
testing!)
Signed-off-by: Andy Ross <andrew.j.ross@intel.com>
Currently thread abort doesn't work if a thread is currently scheduled
on a different CPU, because we have no way of delivering an interrupt
to the other CPU to force the issue. This patch adds a simple
framework for an architecture to provide such an IPI, implements it
for x86_64, and uses it to implement a spin loop in abort for the case
where a thread is currently scheduled elsewhere.
On SMP architectures (xtensa) where no such IPI is implemented, we
fall back to waiting on an arbitrary interrupt to occur. This "works"
for typical code (and all current tests), but of course it cannot be
guaranteed on such an architecture that k_thread_abort() will return
in finite time (e.g. the other thread on the other CPU might have
taken a spinlock and entered an infinite loop, so it will never
receive an interrupt to terminate itself)!
On non-SMP architectures this patch changes no code paths at all.
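The framework amounts to roughly this (a hedged sketch; the hook and
helper names are illustrative):

    void z_arch_sched_ipi(void);   /* arch hook; x86_64 implements it */

    /* in thread abort, when the victim is running on another CPU: */
    z_arch_sched_ipi();
    while (is_aborting(thread)) {
        /* spin until the other CPU takes an interrupt and swaps the
         * thread out; without a real IPI this waits for *any*
         * interrupt, which is why termination can't be guaranteed */
    }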
Signed-off-by: Andy Ross <andrew.j.ross@intel.com>
Tickless timers mean that k_busy_wait() won't work until after the
timer driver is initialized, which is very early but not as early as
SMP. No need for it, just spin.
Also, the original code used a stack variable for the start flag,
which racily presumed that _arch_start_cpu() would come back
synchronously with the other CPU fired up and running right now. The
cleaned up smp
bringup API on x86_64 isn't so perky, so it exposed the bug. The flag
just needs to be static.
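The fix, condensed (illustrative):

    /* must be static: _arch_start_cpu() may return before the other
     * CPU has run, so a stack-local flag could go out of scope while
     * the other CPU still polls it */
    static volatile int cpu_start_flag;

    while (!cpu_start_flag) {
        /* spin; k_busy_wait() needs the timer driver, which is not
         * initialized this early in boot */
    }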
Signed-off-by: Andy Ross <andrew.j.ross@intel.com>
In SMP, _current is not "queued". (The run queue only stores
unscheduled threads because we can't rely on the head of the list
being _current). We weren't updating the cache choice, which would
flag swap_ok, so calling k_thread_abort(_current) (for example, when a
thread exits from its entry function) would try to switch back into
the thread and then run off the end of the function.
Amusingly this was more benign than you'd think. Stumbled on it by
accident.
Signed-off-by: Andy Ross <andrew.j.ross@intel.com>
When CONFIG_SMP is enabled but CONFIG_MP_NUM_CPUS is 1 (which is a
legal configuration, though a weird one) this static function ends up
being defined but unused, producing a compiler warning.
Signed-off-by: Andy Ross <andrew.j.ross@intel.com>
There was a user-error check in the code, where racing insertions of
k_delayed_work items into different queues would be detected and
flagged as an error (honestly I don't see much value there -- Zephyr
doesn't as a general rule protect against errors like this, and
work_q's are inherently kernel things that don't require
userspace-style checking).
This got broken with spinlockification, where each work_q object got
its own lock, so the single lock wouldn't protect against the other
insert function any more. As it happens, that was needless. The core
synchronization on a work_q is in the internal k_queue object anyway
-- the lock in this file was only ever used for (very fast,
noncontending) delayed work insertion. So go back to a global lock to
preserve the original behavior.
Fixes #14104
Signed-off-by: Andy Ross <andrew.j.ross@intel.com>
Update reserved function names starting with one underscore, replacing
them as follows:
'_k_' with 'z_'
'_K_' with 'Z_'
'_handler_' with 'z_handl_'
'_Cstart' with 'z_cstart'
'_Swap' with 'z_swap'
This renaming is done on both global and those static function names
in kernel/include and include/. Other static function names in kernel/
are renamed by removing the leading underscore. Other function names
not starting with any prefix listed above are renamed starting with
a 'z_' or 'Z_' prefix.
Function names starting with two or three leading underscores are not
automatically renamed since these names would collide with the
variants that have two or three leading underscores.
Various generator scripts have also been updated as well as perf,
linker and usb files. These are
drivers/serial/uart_handlers.c
include/linker/kobject-text.ld
kernel/include/syscall_handler.h
scripts/gen_kobject_list.py
scripts/gen_syscall_header.py
Signed-off-by: Patrik Flykt <patrik.flykt@intel.com>