Commit graph

1811 commits

Andrew Boie 09dc929d41 userspace: fix copy from user locking
We don't actually need spinlocks here.

For user_copy(), we are checking that the pointer/size passed in
from user mode represents an area that the thread can read or
write to. Then we do a memcpy into the kernel-side buffer,
which is used from then on. It's OK if another thread scribbles
on the buffer contents during the copy, as we have not yet
begun any examination of its contents.

For the z_user_string*_copy() functions, it's also possible
that another thread could scribble on the string contents,
but we do no analysis of the string other than to establish
a length. We just need to ensure that when these functions
exit, the copied string is NULL terminated.

For SMP, the spinlocks are removed, as they will not prevent a
thread running on another CPU from changing the buffer/string
contents; we just need to safely deal with that possibility.

For UP, the locks do prevent another thread from stepping
in, but it's better to just safely deal with it rather than
affect the interrupt latency of the system.

Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
2019-04-18 17:13:08 -04:00
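
A minimal sketch of the lock-free copy pattern this describes; the
function and the region-check helper below are hypothetical names for
illustration, not the actual kernel code:

    /* Another thread may scribble on src mid-copy; that is acceptable
     * because nothing inspects the data until after the copy, and we
     * NUL-terminate the kernel-side copy ourselves.
     */
    static int copy_user_string(char *dst, const char *src, size_t maxlen)
    {
        size_t len;

        if (!user_region_readable(src, maxlen)) { /* hypothetical check */
            return -EFAULT;
        }
        len = strnlen(src, maxlen);
        if (len >= maxlen) {
            return -EINVAL;
        }
        memcpy(dst, src, len);
        dst[len] = '\0'; /* terminate regardless of what src now holds */
        return 0;
    }
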
Flavio Ceolin 4f99a38b06 arch: all: Remove not used struct _caller_saved
The struct _caller_saved is not used. Most architectures push
the registers onto the stack automatically; in the other
architectures, the exception code does it.

Signed-off-by: Flavio Ceolin <flavio.ceolin@intel.com>
2019-04-18 12:24:56 -07:00
Flavio Ceolin d61c679d43 arch: all: Remove legacy code
The struct _kernel_ach exists only because ARC's port needed it; in
all other ports it was defined as an empty struct. It turns out that
this struct is no longer required even for ARC: it is legacy code
from nanokernel times.

Signed-off-by: Flavio Ceolin <flavio.ceolin@intel.com>
2019-04-18 12:24:56 -07:00
Wentong Wu 8646a8e4f5 tests/kernel/mem_protect/stackprot: stack size adjust
Revert commit 3e255e968, which adjusted the stack size on the
qemu_x86 platform for coverage testing but broke other
platforms' CI tests.

Fixes: #15379.

Signed-off-by: Wentong Wu <wentong.wu@intel.com>
2019-04-12 10:06:43 -04:00
Wentong Wu 3e255e968a tests: adjust stack size for qemu_x86's coverage test
With SDK 0.10.0, more stack is consumed when coverage is
enabled, so adjust the stack size to fix a stack overflow issue.

Fixes: #15206.

Signed-off-by: Wentong Wu <wentong.wu@intel.com>
2019-04-11 17:59:39 -04:00
Anas Nashif 3ae52624ff license: cleanup: add SPDX Apache-2.0 license identifier
Update the files which contain no license information with the
'Apache-2.0' SPDX license identifier.  Many source files in the tree are
missing licensing information, which makes it harder for compliance
tools to determine the correct license.

By default all files without license information are under the default
license of Zephyr, which is Apache version 2.

Signed-off-by: Anas Nashif <anas.nashif@intel.com>
2019-04-07 08:45:22 -04:00
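
After this cleanup a typical Zephyr source file starts with a header of
this shape (copyright line illustrative):

    /*
     * Copyright (c) 2019 Intel Corporation
     *
     * SPDX-License-Identifier: Apache-2.0
     */
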
Andrew Boie 9f04c7411d kernel: enforce usage of CONFIG_TEST_USERSPACE
If a test tries to create a user thread, and the platform
supports user mode, and CONFIG_TEST_USERSPACE has not been
enabled, fail an assertion.

Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
2019-04-06 14:30:42 -04:00
Andrew Boie 4e5c093e66 kernel: demote K_THREAD_STACK_BUFFER() to private
This macro is slated for complete removal, as it's not possible
on arches with an MPU stack guard to know the true buffer bounds
without also knowing the runtime state of its associated thread.

As removing this completely would be invasive to where we are
in the 1.14 release, demote to a private kernel Z_ API instead.
The current way that the macro is being used internally will
not cause any undue harm; we just don't want any external code
depending on it.

The final work to remove this (and overhaul stack specification in
general) will take place in 1.15 in the context of #14269.

Fixes: #14766

Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
2019-04-05 16:10:02 -04:00
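
Assuming the demoted macro follows the kernel's usual Z_ spelling,
internal call sites would look like this sketch:

    /* Private kernel API; external code must not depend on it. */
    char *buf = Z_THREAD_STACK_BUFFER(my_thread_stack);
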
Wentong Wu b991962a2e tests: adjust stack size for qemu_x86 and mps2_an385's coverage test
With SDK 0.10.0, more stack is consumed when coverage is enabled
on the qemu_x86 and mps2_an385 platforms. Adjust the stack size for
most of the test cases, otherwise there will be stack overflows.

Fixes: #14500.

Signed-off-by: Wentong Wu <wentong.wu@intel.com>
2019-04-04 08:23:13 -04:00
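
A typical shape for such an adjustment in a test (sizes illustrative):

    #if defined(CONFIG_COVERAGE)
    #define STACK_SIZE 4096 /* coverage instrumentation needs more stack */
    #else
    #define STACK_SIZE 1024
    #endif

    K_THREAD_STACK_DEFINE(test_stack, STACK_SIZE);
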
Patrik Flykt 7c0a245d32 arch: Rename reserved function names
Rename reserved function names in arch/ subdirectory. The Python
script gen_priv_stacks.py was updated to follow the 'z_' prefix
naming.

Signed-off-by: Patrik Flykt <patrik.flykt@intel.com>
2019-04-03 17:31:00 -04:00
Andrew Boie f0835674a3 lib: os: add sys_mutex data type
For systems without userspace enabled, these work the same
as a k_mutex.

For systems with userspace, the sys_mutex may exist in user
memory. It is still tracked as a kernel object, but has an
underlying k_mutex that is looked up in the kernel object
table.

Future enhancements will optimize sys_mutex to not require
syscalls for uncontended sys_mutexes, using atomic ops
instead.

Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
2019-04-03 13:47:45 -04:00
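
Basic usage mirrors k_mutex; a sketch, with the header path for this
era assumed:

    #include <misc/mutex.h>

    SYS_MUTEX_DEFINE(my_lock); /* may be placed in user memory */

    void update_shared_state(void)
    {
        if (sys_mutex_lock(&my_lock, K_FOREVER) == 0) {
            /* ... touch the data guarded by my_lock ... */
            sys_mutex_unlock(&my_lock);
        }
    }
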
Andrew Boie 1dc6612d50 userspace: do not track net_context as a kobject
The socket APIs no longer deal with direct net context
pointers.

Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
2019-04-03 13:47:45 -04:00
Andrew Boie 526807c33b userspace: add const qualifiers to user copy fns
The source data is never modified.

Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
2019-03-29 22:21:16 -04:00
Andrew Boie ae0d1b2b79 kernel: sched: move stack sentinel check earlier
Checking the stack sentinel may abort the current thread,
so make this check before we determine what the next thread
to run is.

Fixes: #15037

Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
2019-03-29 22:13:40 -04:00
Patrik Flykt 21358baa72 all: Update unsigned 'U' suffix due to multiplication
As the multiplication rule is updated, new unsigned suffixes
are added in the code.

Signed-off-by: Patrik Flykt <patrik.flykt@intel.com>
2019-03-28 17:15:58 -05:00
Patrik Flykt 24d71431e9 all: Add 'U' suffix when using unsigned variables
Add a 'U' suffix to values when computing and comparing against
unsigned variables.

Signed-off-by: Patrik Flykt <patrik.flykt@intel.com>
2019-03-28 17:15:58 -05:00
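
The style in question, in brief (names illustrative):

    #define RX_TIMEOUT_MS 50U                /* unsigned literal */

    if (elapsed_ms > 2U * RX_TIMEOUT_MS) {   /* unsigned multiply/compare */
        handle_timeout();
    }
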
Daniel Leung 416d94cd30 kernel/mutex: remove object monitoring empty loop macros
Some code remaining from object monitoring simply expands to
empty loop macros. Remove these macros, as they are not
functional anyway.

Signed-off-by: Daniel Leung <daniel.leung@intel.com>
2019-03-28 08:55:12 -05:00
David Brown 22029275d3 kernel: Clarify warning about no multithreading
Clarify the warning in the help for CONFIG_MULTITHREADING to make it
clear that many things will break if this is set to 'n'.

Signed-off-by: David Brown <david.brown@linaro.org>
2019-03-28 09:49:59 -04:00
Flavio Ceolin 2df02cc8db kernel: Make if/iteration evaluate boolean operands
Controlling expression of if and iteration statements must have a
boolean type.

MISRA-C rule 14.4

Signed-off-by: Flavio Ceolin <flavio.ceolin@intel.com>
2019-03-26 22:06:45 -04:00
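
The transformation this implies is mechanical; for example (names
illustrative):

    /* Before: the controlling expression is a plain integer. */
    if (count) {
        do_work();
    }

    /* After: the controlling expression is explicitly boolean. */
    if (count != 0) {
        do_work();
    }
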
Flavio Ceolin 625ac2e79f spinlock: Change function signature to return bool
The functions z_spin_lock_valid and z_spin_unlock_valid are essentially
boolean functions; just change their signatures to return a bool instead
of an integer.

MISRA-C rule 10.1

Signed-off-by: Flavio Ceolin <flavio.ceolin@intel.com>
2019-03-26 22:06:45 -04:00
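
The resulting prototypes (function names from the commit message; the
parameter type is assumed):

    bool z_spin_lock_valid(struct k_spinlock *l);
    bool z_spin_unlock_valid(struct k_spinlock *l);
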
Anas Nashif 42f4538e40 kernel: do not use k_busy_wait when on single thread
k_busy_wait() does not work when multithreading is disabled, so do not
try to wait during boot.

Fixes #14454

Signed-off-by: Anas Nashif <anas.nashif@intel.com>
2019-03-26 20:09:07 -04:00
Flavio Ceolin abf27d57a3 kernel: Make statements evaluate boolean expressions
MISRA-C requires that the if statement has essentially Boolean type.

MISRA-C rule 14.4

Signed-off-by: Flavio Ceolin <flavio.ceolin@intel.com>
2019-03-26 14:31:29 -04:00
Flavio Ceolin 2ecc7cfa55 kernel: Make _is_thread_prevented_from_running return a bool
This function was returning an essentially boolean value, so just
change the signature to return a bool.

MISRA-C rule 14.4

Signed-off-by: Flavio Ceolin <flavio.ceolin@intel.com>
2019-03-26 14:31:29 -04:00
Flavio Ceolin a996203739 kernel: Use macro BIT for shift operations
The BIT macro uses an unsigned int, avoiding implementation-defined
behavior when shifting signed types.

MISRA-C rule 10.1

Signed-off-by: Flavio Ceolin <flavio.ceolin@intel.com>
2019-03-26 14:31:29 -04:00
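
The point of the rule in two lines, with BIT() shown as commonly
defined:

    #define BIT(n) (1UL << (n)) /* unsigned shift: well-defined at bit 31 */

    unsigned int reg = 0U;

    reg |= BIT(31); /* (1 << 31) would shift into the sign bit of an int */
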
Piotr Mienkowski a3082e49a1 power: modify HAS_STATE_SLEEP_ Kconfig options
Add SYS_POWER_ prefix to HAS_STATE_SLEEP_, HAS_STATE_DEEP_SLEEP_
options to align them with names of power states they control.
Following is a detailed list of string replacements used:
s/HAS_STATE_SLEEP_(\d)/HAS_SYS_POWER_STATE_SLEEP_$1/
s/HAS_STATE_DEEP_SLEEP_(\d)/HAS_SYS_POWER_STATE_DEEP_SLEEP_$1/

Signed-off-by: Piotr Mienkowski <piotr.mienkowski@gmail.com>
2019-03-26 13:27:55 -04:00
Piotr Mienkowski 17b08ceca5 power: clean up system power management function names
This commit cleans up names of system power management functions by
assuring that:
- all functions start with 'sys_pm_' prefix
- API functions which should not be exposed to the user start with '_'
- name of the function hints at its purpose

Signed-off-by: Piotr Mienkowski <piotr.mienkowski@gmail.com>
2019-03-26 13:27:55 -04:00
Piotr Mienkowski 204311d004 power: rename Low Power States to Sleep States
There exist SoCs, e.g. the STM32L4, where one of the low power modes
reduces CPU frequency and supply voltage but does not stop the CPU. Such
power modes are currently not supported by Zephyr.

To facilitate adding support for such a class of power modes in the
future, and to ensure the naming convention makes it clear that the
currently supported power modes stop the CPU, this commit renames Low
Power States to Sleep States and updates the documentation.

Signed-off-by: Piotr Mienkowski <piotr.mienkowski@gmail.com>
2019-03-26 13:27:55 -04:00
Andy Ross 4521e0c111 kernel/sched: Mark sleeping threads suspended
On SMP, there was a bug where the logic that re-adds _current to the
run queue at swap time would accidentally reschedule threads that had
just gone to sleep, because the is_thread_prevented_from_running()
predicate only tests for threads that are "suspended" or "pending" and
not sleeping.

Overload _THREAD_SUSPENDED to indicate "sleeping" also.  Simple fix
for an immediate bug, though long term we really want to unify all the
blocked conditions to prevent this kind of state bug.

Signed-off-by: Andy Ross <andrew.j.ross@intel.com>
2019-03-23 19:28:15 -04:00
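
A sketch of the predicate gap being closed; the state-flag names exist
in the kernel, but the exact expression here is assumed:

    static inline bool is_thread_prevented_from_running(struct k_thread *t)
    {
        /* Sleeping threads now carry _THREAD_SUSPENDED too, so they
         * match here and are no longer re-added to the run queue at
         * swap time.
         */
        return (t->base.thread_state &
                (_THREAD_PENDING | _THREAD_SUSPENDED)) != 0;
    }
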
Andy Ross e59d19628d kernel/sched: Rework prio validity assertion
This is throwing errors in static analysis, complaining that it is
impossible for a priority to compare as both higher and lower.  To my
eyes that is wrong (I swear I think it might be cueing off the names of the
functions, which invert "higher" and "lower" to match our reversed
priority numbers).

But frankly this was never a very readable macro to begin with.
Refactor to put the bounds into the term, so the static analyzer can
prove it locally, and add a build assertion to catch any errors (there
are none currently) where the low<->high priority range is invalid.

Long term, we should probably remove this macro, it doesn't provide
much value.  But removing it in response to a static analysis failure
is... not very responsible as a development practice.

Fixes #14816
Fixes #14820

Signed-off-by: Andy Ross <andrew.j.ross@intel.com>
2019-03-23 09:53:55 -05:00
Andrew Boie f4631d5b43 kernel: amend comment in k_thread_create handler
This behavior is expected and not of any concern.

Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
2019-03-20 13:59:26 -07:00
Andrew Boie d0035f9779 kernel: fix stack size check in k_thread_create
The pointer arithmetic used didn't account for ARC
supervisor mode stacks, which are allocated at the
end of the stack object. Use the new macro to know
exactly how much space is reserved.

Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
2019-03-20 13:59:26 -07:00
Andy Ross 3dea408405 kernel/sched: Flag DEAD on correct thread in cross-CPU abort
Daniel Leung caught a good one: In the (SMP) case where we were
aborting a thread that was not currently scheduled, we were flagging
the DEAD state on _current and not the thread we were aborting!  This
wasn't as fatal as it seems, as the thread that called z_sched_abort()
would effectively go on living (as a zombie?) in a state where it
would always be preempted, but would otherwise remain schedulable.

Signed-off-by: Andy Ross <andrew.j.ross@intel.com>
2019-03-19 13:39:24 -05:00
Andrew Boie 7ea211256e userspace: properly namespace handler functions
Now prefixed with z_hdlr_ instead of just hdlr_.

Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
2019-03-18 09:23:11 -07:00
Andrew Boie 50be938be5 userspace: renamespace some internal macros
These private macros are now all prefixed with Z_.

Fixes: #14447

Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
2019-03-18 09:23:11 -07:00
Andrew Boie b3eb510f5c kernel: fix atomic ops in user mode on some arches
Most CPUs have instructions like LOCK, LDREX/STREX, etc. which
allow for atomic operations without locking interrupts and which
can be invoked from user mode without complication. They typically
use compiler builtin atomic operations, or custom assembly
to implement them.

However, some CPUs may lack these kinds of instructions, such
as Cortex-M0 or some ARC cores. These use C-based atomic
operation implementations instead. Unfortunately, these require
grabbing a spinlock to ensure proper concurrency with other
threads and ISRs. Hence, they will trigger an exception when
called from user mode.

For these platforms, which support user mode but not atomic
operation instructions, the atomic API has been exposed as
system calls.

Some of the implementations in atomic_c.c which can instead be
expressed in terms of other atomic operations have been removed.

The kernel test of atomic operations now runs in user mode to
prove that this works.

Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
2019-03-18 09:18:00 -04:00
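
The C fallback pattern being wrapped in a system call looks roughly
like this (simplified sketch; the syscall plumbing is omitted):

    static struct k_spinlock lock;

    atomic_val_t z_impl_atomic_add(atomic_t *target, atomic_val_t value)
    {
        k_spinlock_key_t key = k_spin_lock(&lock);
        atomic_val_t ret = *target;

        *target += value;
        k_spin_unlock(&lock, key);
        return ret;
    }
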
Andy Ross 722aeead91 kernel/sched: Nonatomic swap workaround update for qemu behavior
The workaround for nonatomic swap had yet another edge case: it would
save off the _current pointer when pending a thread so that the next
time slice interrupt could test it to see if the swap had actually
happened before assuming that _current could be rescheduled (if it
just pended itself, that's impossible).  Then it would clear the
pending_current pointer so future interrupts wouldn't be confused.

BUT: it turns out that qemu, when faced with really rapid timer rates
that exceed its (host-based) timing accuracy, is perfectly willing to
"stack up" timer interrupts such the one goes pending before the
previous one is finished executing.  In that case, we can enter the
SECOND timer interrupt, to try timeslicing a SECOND time, STILL before
the PendSV exception has run to actually effect the context switch.
Except this time pending_current has been cleared and we try to
reschedule the pended _current thread incorrectly.  In theory real
hardware could do this too, though it would involve absolutely crazy
interrupt latency problems.

Work around this by moving the clear to the thread itself: immediately
after it wakes up from the pend call, it retakes a lock and clears
pending_current if it still matches _current.  That is not a perfect
fix: there remains a 2-3 instruction race at that moment, between the
return from pend and the relocking of interrupts, where a timer
interrupt will see an incorrect pointer.  But I hammered at this and
couldn't make qemu do that (i.e. return from a timer interrupt but
flag a new one in just a cycle or two).

Signed-off-by: Andy Ross <andrew.j.ross@intel.com>
2019-03-15 05:50:43 +01:00
Ramakrishna Pallala e1639b5345 device: Extend device_set_power_state API to support async requests
The existing device_set_power_state() API works only in synchronous
mode and this is not desirable for devices(ex: Gyro) which take
longer time (few 100 mSec) to suspend/resume.

To support async mode, a new callback argument is added to the API.
The device drivers can asynchronously suspend/resume and call the
callback function upon completion of the async request.

This commit adds the missing callback parameter to all the drivers
to make them compliant with the new API.

Signed-off-by: Ramakrishna Pallala <ramakrishna.pallala@intel.com>
2019-03-14 14:26:15 +01:00
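
The extended call shape, as described; the exact signature below is
assumed from this era's API:

    typedef void (*device_pm_cb)(struct device *dev, int status,
                                 void *context, void *arg);

    int device_set_power_state(struct device *dev,
                               u32_t device_power_state,
                               device_pm_cb cb, void *arg);

    /* Passing cb == NULL preserves the old synchronous behavior. */
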
Ioannis Glaropoulos cac20e91d8 kernel: userspace: correct documentation for Z_SYSCALL_MEMORY_ macros
Corrections in the documentation of arguments in
Z_SYSCALL_MEMORY, Z_SYSCALL_MEMORY_READ, and
Z_SYSCALL_MEMORY_WRITE macros.

Signed-off-by: Ioannis Glaropoulos <Ioannis.Glaropoulos@nordicsemi.no>
2019-03-13 15:36:15 -07:00
Ioannis Glaropoulos c686dd5064 kernel: enhance documentation of z_arch_buffer_validate
This commit enhances the documentation of z_arch_buffer_validate
describing the cases where the validation is performed
successfully, as well as the cases where the result is
undefined.

Signed-off-by: Ioannis Glaropoulos <Ioannis.Glaropoulos@nordicsemi.no>
2019-03-13 15:36:15 -07:00
Andy Ross ea1c99b11b kernel/sched: Fix k_yield() in SMP
This was always doing a remove/add of the _current thread to the run
queue, which is wrong because in SMP _current isn't in the queue to
remove.  But it went undetected until the recent dlist changes.

Signed-off-by: Andy Ross <andrew.j.ross@intel.com>
2019-03-13 19:15:20 +01:00
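
The SMP-safe shape this implies, as a sketch (queue helper names
assumed):

    /* In SMP, _current is not kept in the run queue, so remove it only
     * if it is actually queued before re-adding and rescheduling.
     */
    if (z_is_thread_queued(_current)) {
        z_priq_run_remove(&_kernel.ready_q.runq, _current);
    }
    z_priq_run_add(&_kernel.ready_q.runq, _current);
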
Andy Ross 8c1bdda33c kernel/sched: Fix spinlock validation glitch in SMP
In SMP, we are setting the _current pointer while holding the
scheduler spinlock locally, which means that when we try to release it
the validation layer (not the spinlock per se) will scream at us
because the thread that took the lock doesn't match the one releasing
it.

Special case this when validation is enabled.

Signed-off-by: Andy Ross <andrew.j.ross@intel.com>
2019-03-13 19:15:20 +01:00
Andy Ross f37e0c6e4d kernel/spinlock: Fix race in spinlock validation
The k_spin_lock() validation was setting the new owner of the spinlock
BEFORE the actual lock was taken, so it could race against other
processors trying the same thing.  Split the modification step out
into a separate function that can be called after we affirmatively
have the lock.

Signed-off-by: Andy Ross <andrew.j.ross@intel.com>
2019-03-13 19:15:20 +01:00
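
The ordering fix, schematically; helper names and the irq-lock call
are assumed for illustration:

    k_spinlock_key_t k_spin_lock(struct k_spinlock *l)
    {
        k_spinlock_key_t k;

        k.key = z_arch_irq_lock();
        __ASSERT(z_spin_lock_valid(l), "recursive spinlock %p", l);

        while (!atomic_cas(&l->locked, 0, 1)) {
            /* spin until we own the lock */
        }
        z_spin_lock_set_owner(l); /* record ownership only after the CAS */
        return k;
    }
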
Andy Ross b18685bcf1 kernel/sched: Clean up tracing hooks
The tracing fixes in commit e87193896a ("subsys: debug: tracing: Fix
thread tracing") were... not a readability win.  The point appears to
have been to put a tracing hook immediately before and after the
assignment to the _current pointer.  So do that in an abstracted
function and clean up _get_next_switch_handle() (which is a subtle and
important function already polluted with some unavoidable preprocessor
testing!)

Signed-off-by: Andy Ross <andrew.j.ross@intel.com>
2019-03-13 19:15:20 +01:00
Andy Ross 42ed12a387 kernel/sched: arch/x86_64: Support synchronous k_thread_abort() in SMP
Currently thread abort doesn't work if a thread is currently scheduled
on a different CPU, because we have no way of delivering an interrupt
to the other CPU to force the issue.  This patch adds a simple
framework for an architecture to provide such an IPI, implements it
for x86_64, and uses it to implement a spin loop in abort for the case
where a thread is currently scheduled elsewhere.

On SMP architectures (xtensa) where no such IPI is implemented, we
fall back to waiting on an arbitrary interrupt to occur.  This "works"
for typical code (and all current tests), but of course it cannot be
guaranteed on such an architecture that k_thread_abort() will return
in finite time (e.g. the other thread on the other CPU might have
taken a spinlock and entered an infinite loop, so it will never
receive an interrupt to terminate itself)!

On non-SMP architectures this patch changes no code paths at all.

Signed-off-by: Andy Ross <andrew.j.ross@intel.com>
2019-03-13 19:15:20 +01:00
Andy Ross 6ed59bc442 kernel/smp: Fix bitrot with the way the SMP "start flag" works
Tickless timers mean that k_busy_wait() won't work until after the
timer driver is initialized, which is very early but not as early as
SMP.  No need for it, just spin.

Also the original code used a stack variable for the start flag, which
racily presumed that _arch_start_cpu() would comes back synchronously
with the other CPU fired up and running right now.  The cleaned up smp
bringup API on x86_64 isn't so perky, so it exposed the bug.  The flag
just needs to be static.

Signed-off-by: Andy Ross <andrew.j.ross@intel.com>
2019-03-13 19:15:20 +01:00
Andy Ross aed8288196 kernel/sched: Handle aborting _current correctly in SMP
In SMP, _current is not "queued".  (The run queue only stores
unscheduled threads because we can't rely on the head of the list
being _current).  We weren't updating the cache choice, which would
flag swap_ok, so calling k_thread_abort(_current) (for example, when a
thread exits from its entry function) would try to switch back into
the thread and then run off the end of the function.

Amusingly this was more benign than you'd think.  Stumbled on it by
accident.

Signed-off-by: Andy Ross <andrew.j.ross@intel.com>
2019-03-13 19:15:20 +01:00
Andy Ross 0f8bee9c07 kernel/smp: Warning cleanup
When CONFIG_SMP is enabled but CONFIG_MP_NUM_CPUS is 1 (which is a
legal configuration, though a weird one) this static function ends up
being defined but unused, producing a compiler warning.

Signed-off-by: Andy Ross <andrew.j.ross@intel.com>
2019-03-13 19:15:20 +01:00
Andy Ross c0183fdedd kernel/work_q: Fix locking across multiple queues
The code detected a user error where racing insertions of
k_delayed_work items into different queues would be caught and
flagged as an error (honestly I don't see much value there -- Zephyr
doesn't as a general rule protect against errors like this, and
work_q's are inherently kernel things that don't require
userspace-style checking).

This got broken with spinlockification, where each work_q object got
its own lock, so the single lock wouldn't protect against the other
insert function any more.  As it happens, that was needless.  The core
synchronization on a work_q is in the internal k_queue object anyway
-- the lock in this file was only ever used for (very fast,
noncontending) delayed work insertion.  So go back to a global lock to
preserve the original behavior.

Fixes #14104

Signed-off-by: Andy Ross <andrew.j.ross@intel.com>
2019-03-12 18:37:41 +01:00
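
Schematically, the fix is just lock scope:

    /* One file-scope lock shared by every work_q, so racing delayed-work
     * insertions into different queues still serialize against each other.
     */
    static struct k_spinlock lock;
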
Pawel Dunaj b87920bf3c kernel: Make heap smallest object size configurable
Allow the application to choose the size of the smallest object taken
from the heap.

Signed-off-by: Pawel Dunaj <pawel.dunaj@nordicsemi.no>
2019-03-12 11:56:31 +01:00
Patrik Flykt 4344e27c26 all: Update reserved function names
Update reserved function names starting with one underscore, replacing
them as follows:
   '_k_' with 'z_'
   '_K_' with 'Z_'
   '_handler_' with 'z_handl_'
   '_Cstart' with 'z_cstart'
   '_Swap' with 'z_swap'

This renaming is done for both global and static function names
in kernel/include and include/. Other static function names in kernel/
are renamed by removing the leading underscore. Other function names
not starting with any prefix listed above are renamed starting with
a 'z_' or 'Z_' prefix.

Function names starting with two or three leading underscores are not
automatically renamed since these names will collide with the variants
with two or three leading underscores.

Various generator scripts have also been updated as well as perf,
linker and usb files. These are
   drivers/serial/uart_handlers.c
   include/linker/kobject-text.ld
   kernel/include/syscall_handler.h
   scripts/gen_kobject_list.py
   scripts/gen_syscall_header.py

Signed-off-by: Patrik Flykt <patrik.flykt@intel.com>
2019-03-11 13:48:42 -04:00