Commit graph

107 commits

Author SHA1 Message Date
Flavio Ceolin 625ac2e79f spinlock: Change function signature to return bool
Functions z_spin_lock_valid and z_spin_unlock_valid are essentially
boolean functions; change their signatures to return a bool instead
of an integer.

MISRA-C rule 10.1

Signed-off-by: Flavio Ceolin <flavio.ceolin@intel.com>
2019-03-26 22:06:45 -04:00
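A minimal sketch of the resulting declarations (illustrative only; the
real prototypes live in the kernel's spinlock validation code):

    #include <stdbool.h>

    struct k_spinlock;

    /* previously: int z_spin_lock_valid(struct k_spinlock *l); */
    bool z_spin_lock_valid(struct k_spinlock *l);
    bool z_spin_unlock_valid(struct k_spinlock *l);
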
Andrew Boie f4631d5b43 kernel: amend comment in k_thread_create handler
This behavior is expected and not of any concern.

Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
2019-03-20 13:59:26 -07:00
Andrew Boie d0035f9779 kernel: fix stack size check in k_thread_create
The pointer arithmetic used didn't account for ARC
supervisor mode stacks, which are allocated at the
end of the stack object. Use the new macro to know
exactly how much space is reserved.

Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
2019-03-20 13:59:26 -07:00
Andy Ross f37e0c6e4d kernel/spinlock: Fix race in spinlock validation
The k_spin_lock() validation was setting the new owner of the spinlock
BEFORE the actual lock was taken, so it could race against other
processors trying the same thing.  Split the modification step out
into a separate function that can be called after we affirmatively
have the lock.

Signed-off-by: Andy Ross <andrew.j.ross@intel.com>
2019-03-13 19:15:20 +01:00
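A hedged sketch of the corrected ordering: the owner bookkeeping moves
into a helper that runs only once the lock is affirmatively held
(simplified here; the real code wraps these steps in validation and
SMP ifdefs):

    k_spinlock_key_t k_spin_lock(struct k_spinlock *l)
    {
            k_spinlock_key_t k;

            k.key = z_arch_irq_lock();
            __ASSERT(z_spin_lock_valid(l), "recursive spinlock %p", l);

            while (!atomic_cas(&l->locked, 0, 1)) {
                    /* spin until the CAS actually succeeds */
            }

            z_spin_lock_set_owner(l); /* only after the lock is ours */
            return k;
    }
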
Andy Ross 42ed12a387 kernel/sched: arch/x86_64: Support synchronous k_thread_abort() in SMP
Currently, thread abort doesn't work if a thread is scheduled
on a different CPU, because we have no way of delivering an interrupt
to the other CPU to force the issue.  This patch adds a simple
framework for an architecture to provide such an IPI, implements it
for x86_64, and uses it to implement a spin loop in abort for the case
where a thread is currently scheduled elsewhere.

On SMP architectures (xtensa) where no such IPI is implemented, we
fall back to waiting on an arbitrary interrupt to occur.  This "works"
for typical code (and all current tests), but of course it cannot be
guaranteed on such an architecture that k_thread_abort() will return
in finite time (e.g. the other thread on the other CPU might have
taken a spinlock and entered an infinite loop, so it will never
receive an interrupt to terminate itself)!

On non-SMP architectures this patch changes no code paths at all.

Signed-off-by: Andy Ross <andrew.j.ross@intel.com>
2019-03-13 19:15:20 +01:00
Patrik Flykt 4344e27c26 all: Update reserved function names
Update reserved function names starting with one underscore, replacing
them as follows:
   '_k_' with 'z_'
   '_K_' with 'Z_'
   '_handler_' with 'z_handl_'
   '_Cstart' with 'z_cstart'
   '_Swap' with 'z_swap'

This renaming is done on both global and static function names
in kernel/include and include/. Other static function names in kernel/
are renamed by removing the leading underscore. Other function names
not starting with any prefix listed above are renamed with a
'z_' or 'Z_' prefix.

Function names starting with two or three leading underscores are not
automatically renamed, since those names would collide with the
variants with two or three leading underscores.

Various generator scripts have also been updated, as well as perf,
linker and usb files. These are:
   drivers/serial/uart_handlers.c
   include/linker/kobject-text.ld
   kernel/include/syscall_handler.h
   scripts/gen_kobject_list.py
   scripts/gen_syscall_header.py

Signed-off-by: Patrik Flykt <patrik.flykt@intel.com>
2019-03-11 13:48:42 -04:00
Ioannis Glaropoulos d69c2f8129 kernel: documentation for _setup_new_thread()
Add a note in the documentation of the _setup_new_thread()
function stating that the caller is responsible for
providing a size argument that corresponds to the available
thread stack area.

Signed-off-by: Ioannis Glaropoulos <Ioannis.Glaropoulos@nordicsemi.no>
2019-03-09 11:57:24 -08:00
Andy Ross e456d0f7dd kernel/thread: Spinlockify
Straightforward spinlock around the global thread state.  Two changes
to the locking strategy were needed:

1. There was a needless recursive lock taken in schedule_new_thread().
This is only ever invoked in circumstances where the lock was already
held, or where there is no need for internal synchronization.

2. The recursive irq_lock() around the loop that spawns the initial
static threads (which happens at the start of main thread execution)
was removed.  Most of the job (i.e. making sure the threads don't run
before the loop is finished) was already covered by the sched_lock it
was taking, and the attempt to promise that all the
timeouts happen on the same tick is already true by construction at
system startup on uniprocessor systems, and not possible to guarantee
at all under SMP (where other CPUs can take that timer interrupt).  We
don't document or test for this feature, so don't try to be fancy.

Signed-off-by: Andy Ross <andrew.j.ross@intel.com>
2019-02-08 14:49:39 -05:00
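For reference, the conversion pattern used throughout this series (a
sketch; the lock name and critical section are illustrative):

    static struct k_spinlock thread_state_lock;

    void touch_thread_state(void)
    {
            k_spinlock_key_t key = k_spin_lock(&thread_state_lock);

            /* ... code formerly guarded by irq_lock()/irq_unlock() ... */

            k_spin_unlock(&thread_state_lock, key);
    }
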
Andy Ross ec554f44d9 kernel: Split reschedule & pend into irq/spin lock versions
Just like with _Swap(), we need two variants of these utilities which
can atomically release a lock and context switch.  The naming shifts
(for byte count reasons) to _reschedule/_pend_curr, and both have an
_irqlock variant which takes the traditional locking.

Just refactoring.  No logic changes.

Signed-off-by: Andy Ross <andrew.j.ross@intel.com>
2019-02-08 14:49:39 -05:00
Andy Ross 5aa7460e5c kernel/spinlock: Move validation out of header inlines
The validation checking recently added to spinlocks is useful, but
requires kernel-internals like _current and _current_cpu in a header
context that tends to be needed before those are declared (or where we
don't want them declared), and is causing big header dependency
headaches.

Move it to C code; it's just a validation tool, not a
performance-critical path.

Signed-off-by: Andy Ross <andrew.j.ross@intel.com>
2019-02-08 14:49:39 -05:00
Andy Ross 53cae5f471 kernel: Use _reschedule() instead of _Swap() where possible
These two spots were duplicating logic that is already done inside
_reschedule(), which is the cleaner, less dangerous API.  Use it where
possible when outside the scheduler internals.

Signed-off-by: Andy Ross <andrew.j.ross@intel.com>
2019-02-08 14:49:39 -05:00
Andy Ross ab46b1b3c5 kernel/sched: CPU mask affinity/pinning API
This adds a simple implementation of SMP CPU affinity to Zephyr.  The
API is simple and doesn't try to invent abstractions like "cpu sets".
Each thread has an enable/disable flag associated with each CPU in the
system, and the bits can be turned on and off (for threads that are
not currently runnable, of course) using an easy three-function API.

Because the implementation picked requires enumerating runnable
threads in priority order, looking for one that matches the current CPU,
this is not a good fit for the SCALABLE or MULTIQ scheduler backends,
so it currently can be enabled only for SCHED_DUMB (which is the
default anyway).  Fancier algorithms do exist, but even the best of
them scale as O(N_CPUS), so aren't quite constant time and often
require significant memory overhead to keep separate lists for
different cpus/sets.

The intended use here is for apps that want to "pin" threads to
specific CPUs for latency control, or conversely to prevent certain
threads from taking time on specific CPUs to leave them free for fast
response.

Signed-off-by: Andy Ross <andrew.j.ross@intel.com>
2019-02-01 21:37:24 -05:00
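A usage sketch of the affinity API (assuming CONFIG_SMP=y and
CONFIG_SCHED_CPU_MASK=y; the wrapper function is hypothetical):

    void pin_to_cpu(k_tid_t thread, int cpu)
    {
            /* Only legal while the thread is not runnable, e.g. one
             * created with a K_FOREVER delay and started afterwards.
             */
            k_thread_cpu_mask_clear(thread);       /* run on no CPU ...   */
            k_thread_cpu_mask_enable(thread, cpu); /* ... except this one */
    }
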
Flavio Ceolin 6a4a86e413 kernel: Change k_is_in_isr to return bool
Change this function to return a boolean type.

MISRA-C rule 14.4

Signed-off-by: Flavio Ceolin <flavio.ceolin@intel.com>
2019-01-07 08:52:07 -05:00
Flavio Ceolin 09e362e0d0 kernel: Change _is_thread_essential to return bool
Change this function to return a boolean type.

MISRA-C rule 14.4

Signed-off-by: Flavio Ceolin <flavio.ceolin@intel.com>
2019-01-07 08:52:07 -05:00
Flavio Ceolin 76b3518ce6 kernel: Make statements evaluate boolean expressions
MISRA-C requires that the controlling expression of an if statement
have an essentially Boolean type.

MISRA-C rule 14.4

Signed-off-by: Flavio Ceolin <flavio.ceolin@intel.com>
2019-01-07 08:52:07 -05:00
Adithya Baglody 392219eab8 kernel: Change the prototype of k_thread_access_grant.
This API took a variable number of arguments, which is not allowed
by the MISRA C guidelines (Rule 17.1). Hence this API becomes a macro
that uses the util macro FOR_EACH_FIXED_ARG to provide the same
functionality.

There is one deviation from the old function: the last argument
must not be NULL.

Signed-off-by: Adithya Baglody <adithya.nagaraj.baglody@intel.com>
2019-01-03 12:35:14 -08:00
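A usage sketch of the macro form with hypothetical objects; note the
absence of a trailing NULL terminator:

    K_SEM_DEFINE(data_sem, 0, 1);
    K_MUTEX_DEFINE(data_lock);

    void grant_worker_access(struct k_thread *worker)
    {
            k_thread_access_grant(worker, &data_sem, &data_lock);
    }
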
Andrew Boie 42cfd4ff26 kernel: expose k_busy_wait() to user mode
If we only had the kernel's implementation, we could
simply move this to lib/, but possible arch-specific
implementations dictate that we make this a
syscall.

Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
2018-11-15 16:20:36 -05:00
Andy Ross 5d203523b6 kernel/timeout: Eliminate wait_q parameters from API
Now that this is known to be an unused value, remove it from the API.
Note that this caught a few spots where we were passing values (a
non-NULL wait_q with a NULL thread handle) that were always being
ignored before.

Signed-off-by: Andy Ross <andrew.j.ross@intel.com>
2018-10-16 15:03:10 -04:00
Andy Ross 25863549be kernel: Remove clock_always_on control from k_busy_wait()
This feature was a useless noop based on mistaken API understanding.

The idea seems to have been that k_busy_wait() included guards to
ensure "clock_always_on" was true duing the loop, presumably because
the original author was afraid that "turning the clock off" would
affect the operation of k_cycle_get_32().

Then later someone came around and "optimized" this for Quark SE,
where the cycle counter is the RTC and unrelated to the timer driver
used by the clock_always_on feature.  (Except even there it presumably
should have been done at the SoC level and not just in the C1000
devboard -- note that Arduino 101 never would have gotten this).

But it was all a mistake: "clock_always_on" has nothing to do with
en/disabling the system cycle timer (which never happens when the
system is active, that's a feature of idle), it's a control over the
delivery of timer interrupts.  And needless to say we don't care about
timer interrupts when we're spinning on a cycle counter.

Yank the whole mess.

Signed-off-by: Andy Ross <andrew.j.ross@intel.com>
2018-10-16 15:03:10 -04:00
Andy Ross 220d4f8347 sys_clock.h: Make "global variable" APIs into proper functions
The existing API defined sys_clock_{hw_cycles,ticks}_per_sec as simple
"variables" to be shared, except that they were only real storage in
certain modes (the HPET driver, basically) and everywhere else they
were a build constant.

Properly, these should be an API defined by the timer driver (who
controls those rates) and consumed by the clock subsystem.  So give
them function syntax as a stepping stone to get there.

Note that this also removes the deprecated variable
_sys_clock_us_per_tick rather than give it the same treatment.

Signed-off-by: Andy Ross <andrew.j.ross@intel.com>
2018-10-16 15:03:10 -04:00
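In the common fixed-rate configuration the new function syntax can
still collapse to a compile-time constant; a sketch, not the literal
header:

    static inline int sys_clock_hw_cycles_per_sec(void)
    {
            return CONFIG_SYS_CLOCK_HW_CYCLES_PER_SEC;
    }

Only a driver that discovers the rate at runtime (the HPET case above)
needs to back this with real storage.
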
Anas Nashif c77c043071 kernel: remove deprecated k_thread_cancel
Remove deprecated function k_thread_cancel. We now use k_thread_abort.

Signed-off-by: Anas Nashif <anas.nashif@intel.com>
2018-10-09 13:58:01 -04:00
Flavio Ceolin 18af4c6299 kernel: Fix overflow test problem introduced in 92ea2f9
The builtin function __builtin_umul_overflow returns a boolean and
should not be checked as an integer.

Signed-off-by: Flavio Ceolin <flavio.ceolin@intel.com>
2018-10-04 05:20:29 -07:00
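The builtin returns true on overflow, so its result should be treated
as a boolean. A minimal self-contained example (the wrapper is
hypothetical):

    #include <stdbool.h>

    /* Returns true when a * b fits in *out without overflow. */
    static bool mul_ok(unsigned int a, unsigned int b, unsigned int *out)
    {
            return !__builtin_umul_overflow(a, b, out);
    }
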
Flavio Ceolin ea716bf023 kernel: Explicitly comparing pointer with NULL
MISRA-C rule: 14.4

Signed-off-by: Flavio Ceolin <flavio.ceolin@intel.com>
2018-09-28 06:28:41 +05:30
Flavio Ceolin 92ea2f9189 kernel: Calling Z_SYSCALL_VERIFY_MSG with boolean expressions
Explicitly form a boolean expression when calling the
Z_SYSCALL_VERIFY_MSG macro.

MISRA-C rule: 14.4

Signed-off-by: Flavio Ceolin <flavio.ceolin@intel.com>
2018-09-28 06:28:41 +05:30
Flavio Ceolin d8837c6888 kernel: Using boolean expression on ASSERT macros
The ASSERT macro expects a boolean expression; make this
explicit.

MISRA-C rule 14.4

Signed-off-by: Flavio Ceolin <flavio.ceolin@intel.com>
2018-09-28 06:28:41 +05:30
Anas Nashif 57554055d2 kernel: add a new API for setting thread names
Add k_thread_name_set() and enable thread name setting when declaring
static threads. This is enabled only when THREAD_MONITOR is used. System
threads get a name by default.

Signed-off-by: Anas Nashif <anas.nashif@intel.com>
2018-09-27 08:58:55 +05:30
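A usage sketch (assuming CONFIG_THREAD_MONITOR=y, per the note above;
the wrapper is hypothetical):

    void label_worker(k_tid_t tid)
    {
            k_thread_name_set(tid, "worker");
    }
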
Paul Sokolovsky 2df1829c55 kernel: thread: Typo fixes in comment
Typo fixes in comment to k_thread_foreach().

Signed-off-by: Paul Sokolovsky <paul.sokolovsky@linaro.org>
2018-09-26 17:46:23 +05:30
Ioannis Glaropoulos 66192618a7 arch: arm: Minor style and typo fixes in inline comments
Several style and typo fixes in inline comments of arm kernel
files and thread.c.

Signed-off-by: Ioannis Glaropoulos <Ioannis.Glaropoulos@nordicsemi.no>
2018-09-24 04:56:34 -07:00
Flavio Ceolin c806ac3d36 kernel: Compare pointers with NULL in while statements
Make while statements that use pointers explicitly check whether
the value is NULL.

The C standard does not say that the null pointer is the same
as a pointer to memory address 0, so it is good practice to
always compare against the macro NULL.

Signed-off-by: Flavio Ceolin <flavio.ceolin@intel.com>
2018-09-18 13:57:15 -04:00
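The before/after shape of such a loop, with a hypothetical list node
type:

    #include <stddef.h>

    struct node {
            struct node *next;
    };

    void walk(struct node *head)
    {
            struct node *n = head;

            while (n != NULL) {     /* previously: while (n) */
                    n = n->next;
            }
    }
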
Flavio Ceolin b3d9202704 kernel: Using boolean constants instead of 0 or 1
MISRA C requires that every controlling expression of an if or while
statement have a boolean type.

Signed-off-by: Flavio Ceolin <flavio.ceolin@intel.com>
2018-09-18 13:57:15 -04:00
Flavio Ceolin 8f72f245bd kernel: Explicitly check _abort_thread_timeout
This API is often called during some cleanup, even if the timeout
was not set, to keep the code simpler. In these cases it is not
necessary to check the return value. Add a cast to acknowledge it.

Signed-off-by: Flavio Ceolin <flavio.ceolin@intel.com>
2018-09-14 16:55:37 -04:00
Flavio Ceolin 5884c7f54b kernel: Explicitly ignoring _Swap return
Explicitly ignore the _Swap return value where there is no error
handling or nothing to do.

Signed-off-by: Flavio Ceolin <flavio.ceolin@intel.com>
2018-09-14 16:55:37 -04:00
Anas Nashif a2248782a2 kernel: event_logger: remove kernel_event_logger
Move to more generic tracing hooks that can be implemented in different
ways and do not interfere with the kernel.

Signed-off-by: Anas Nashif <anas.nashif@intel.com>
2018-08-21 05:45:47 -07:00
Anas Nashif b6304e66f6 tracing: support generic tracing hooks
Define generic interface and hooks for tracing to replace
kernel_event_logger and existing tracing facilities with something more
common.

Signed-off-by: Anas Nashif <anas.nashif@intel.com>
2018-08-21 05:45:47 -07:00
Daniel Leung fc182430c0 kernel: userspace: reserve stack space to store local data
This enables reserving a little space at the top of the stack to
store data local to the thread when CONFIG_USERSPACE is enabled. The
first customer of this is errno.

Note that ARC, due to how it lays out the user stack and
privilege stack, sets the pointer itself rather than
relying on the common approach.

Fixes: #9067

Signed-off-by: Daniel Leung <daniel.leung@intel.com>
2018-08-17 09:40:52 -07:00
Flavio Ceolin 0866d18d03 irq: Fix irq_lock api usage
irq_lock returns an unsigned int, yet several places were using a
signed int. This commit fixes that.

To keep this error from happening again, a coccinelle script was
added and can be used to check for violations.

Signed-off-by: Flavio Ceolin <flavio.ceolin@intel.com>
2018-08-16 19:47:41 -07:00
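The corrected pattern; the function is illustrative, while irq_lock()
and irq_unlock() are the existing kernel API:

    void do_critical_work(void)
    {
            unsigned int key = irq_lock();  /* not: int key */

            /* ... interrupts locked ... */

            irq_unlock(key);
    }
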
Andrew Boie 7f4d006959 kernel: fix errno access for user mode
The errno "variable" is required to be thread-specific.
It gets defined to a macro which dereferences a pointer
returned by a kernel function.

In user mode, we cannot simply read/write the thread struct.
We do not have a thread-local storage mechanism, so for now
use the lowest address of the thread stack to store this
value, since that is guaranteed to be read/writable by
a user thread.

The downside of this approach is potential stack corruption
if the stack pointer goes down this far but does not exceed
the location, since a fault won't be generated in this case.

Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
2018-07-19 16:44:59 -07:00
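A hedged sketch of the plumbing described above; the accessor name is
illustrative, as it has varied across Zephyr revisions:

    /* errno expands to a dereference of a per-thread pointer.  In
     * user mode the pointer targets the lowest address of the thread
     * stack rather than the thread struct.
     */
    int *z_errno(void);
    #define errno (*z_errno())
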
Ramakrishna Pallala e74d85d816 kernel: thread: Simplify k_thread_foreach conditional inclusion
Simplify k_thread_foreach API conditional inclusion by putting
the whole logic under CONFIG_THREAD_MONITOR config option.

Signed-off-by: Ramakrishna Pallala <ramakrishna.pallala@intel.com>
2018-07-18 15:42:28 -04:00
Spoorthi K 47a9f9a617 kernel: thread: Exclude deprecated function from lcov
Do not consider the deprecated function for code coverage.

Signed-off-by: Spoorthi K <spoorthi.k@intel.com>
2018-07-18 13:26:18 -04:00
Andrew Boie 2dd91eca0e kernel: move thread monitor init to common code
The original implementation of CONFIG_THREAD_MONITOR would
try to leverage a thread's initial stack layout to provide
the entry function with arguments for any given thread.

This is problematic:

- Some arches do not have an initial stack layout suitable for
this
- Some arches never enabled this at all (riscv32, nios2)
- Some arches did not enable this properly
- Dropping to user mode would erase or provide incorrect
information.

Just spend a few extra bytes to store this stuff directly
in the k_thread struct and get rid of all the arch-specific
code for this.

Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
2018-06-06 14:26:45 -04:00
Michael Scott f669a08eea kernel: thread: fix _THREAD_DUMMY check in _check_stack_sentinel()
All other checks of thread_state use a bitwise & operator in case
other flags are attached to the thread_state. Fix the only
outlier, in _check_stack_sentinel(), to do the same.

Signed-off-by: Michael Scott <michael@opensourcefoundries.com>
2018-06-01 09:03:48 -04:00
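The shape of the fix, assuming the kernel-internal definitions
(struct k_thread, _THREAD_DUMMY) are in scope; the wrapper is
illustrative:

    static void check_sentinel(struct k_thread *thread)
    {
            /* thread_state is a set of flags: test membership with a
             * bitwise &, not ==, so unrelated flags cannot defeat the
             * check.
             */
            if ((thread->base.thread_state & _THREAD_DUMMY) != 0) {
                    return; /* dummy threads have no sentinel */
            }

            /* ... sentinel verification ... */
    }
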
Andrew Boie 538754cb28 kernel: handle early entropy issues
We generalize querying the entropy driver directly with
a new internal API, which is now used by CONFIG_STACK_RANDOM
and stack canary initialization.

Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
2018-05-23 19:38:06 -07:00
Andy Ross 4a2e50f6b0 kernel: Earliest-deadline-first scheduling policy
Very simple implementation of deadline scheduling.  Works by storing a
single word in each thread containing a deadline, setting it (as a
delta from "now") via a single new API call, and using it as extra
input to the existing thread priority comparison function when
priorities are equal.

Signed-off-by: Andy Ross <andrew.j.ross@intel.com>
2018-05-23 14:25:52 -04:00
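A usage sketch of the single new call (assuming CONFIG_SCHED_DEADLINE=y;
the cycle count is arbitrary):

    void queue_urgent_work(k_tid_t thread)
    {
            /* Deadline is a delta from "now" in hardware cycles and
             * only breaks ties between threads of equal priority.
             */
            k_thread_deadline_set(thread, 50000);
    }
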
Andrew Boie 8345e5ebf0 syscalls: remove policy from handler checks
The various macros to do checks in system call handlers would all
implicitly generate a kernel oops if a check failed.
This is undesirable for a few reasons:

* System call handlers that acquire resources in the handler
  have no good recourse for cleanup if a check fails.
* In some cases we may want to propagate a return value back
  to the caller instead of just killing the calling thread,
  even though the base API doesn't do these checks.

These macros now all return a value, if nonzero is returned
the check failed. K_OOPS() now wraps these calls to generate
a kernel oops.

At the moment, the policy for all APIs has not changed. They
still all oops upon a failed check.

The macros now use the Z_ notation for private APIs.

Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
2018-05-17 23:34:03 +03:00
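A hedged sketch of the resulting handler pattern, using the macro
names from the text above (the size check and the cleanup branch
contents are hypothetical):

    /* Old, implicit policy: oops on any failed check. */
    K_OOPS(Z_SYSCALL_VERIFY_MSG(size <= MAX_SIZE, "size too large"));

    /* New option: check, clean up, propagate an error. */
    if (Z_SYSCALL_VERIFY_MSG(size <= MAX_SIZE, "size too large") != 0) {
            k_free(buf);    /* release what the handler acquired */
            return -EINVAL;
    }
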
Andrew Boie 92e5bd7473 kernel: internal APIs for thread resource pools
Some kernel APIs may need to allocate memory in order to function
correctly, especially if they are exposed to userspace where
buffers provided by user code cannot be trusted.

Instead of simply drawing from the system heap, specific pools
may instead be assigned to threads, and any requests made on
behalf of the calling thread will draw heap memory from that pool.

Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
2018-05-16 17:32:59 -07:00
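A usage sketch with a hypothetical pool, using the era's memory-pool
API:

    K_MEM_POOL_DEFINE(worker_pool, 64, 1024, 4, 4);

    void setup_worker(struct k_thread *worker)
    {
            /* Kernel-side allocations made on behalf of this thread
             * now draw from worker_pool instead of the system heap.
             */
            k_thread_resource_pool_assign(worker, &worker_pool);
    }
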
Adithya Baglody 5133cf56aa kernel: thread: Move out the function _thread_entry() to lib
_thread_entry() is not really a part of the kernel but a part of
Zephyr's C runtime support library, hence moving just the
function to lib/thread_entry.c.

Signed-off-by: Adithya Baglody <adithya.nagaraj.baglody@intel.com>
2018-05-15 17:48:18 +03:00
Ramakrishna Pallala 110b8e42ff kernel: Add k_thread_foreach API
Add k_thread_foreach API to iterate over all the threads in
the system.

This API can be used for debugging threads in a multithreaded
environment, to dump and analyze various thread parameters like
priority, state, stack address, etc.

Signed-off-by: Ramakrishna Pallala <ramakrishna.pallala@intel.com>
2018-05-15 13:43:00 +03:00
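A usage sketch; the callback signature matches k_thread_user_cb_t, and
printk output is assumed available:

    static void dump_one(const struct k_thread *t, void *user_data)
    {
            int *count = user_data;

            printk("thread %p, prio %d\n", t, t->base.prio);
            (*count)++;
    }

    void dump_all_threads(void)
    {
            int count = 0;

            k_thread_foreach(dump_one, &count);
            printk("%d threads\n", count);
    }
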
Andy Ross 22642cf309 kernel: Clean up _unpend_thread() API
Almost everywhere this was called, it was immediately followed by
_abort_thread_timeout(), for obvious reasons.  The only exceptions
were in timeout and k_timer expiration (unifying these two would be
another good cleanup), which are peripheral parts of the scheduler and
can plausibly use a more "internal" API.

So make the common case the default, and expose the old behavior as
_unpend_thread_no_timeout() (along with identical changes for
_unpend_first_thread).  This saves code bytes and simplifies scheduler
surface area for future synchronization work.

Signed-off-by: Andy Ross <andrew.j.ross@intel.com>
2018-04-24 03:57:20 +05:30
Andy Ross 15cb5d7293 kernel: Further unify _reschedule APIs
Now that other work has eliminated the two cases where we had to do a
reschedule "but yield even if we are cooperative", we can squash both
down to a single _reschedule() function which does almost exactly what
legacy _Swap() did, but wrapped as a proper scheduler API.

Signed-off-by: Andy Ross <andrew.j.ross@intel.com>
2018-04-24 03:57:20 +05:30
Andy Ross 8606fabf74 kernel: Scheduler refactoring: use _reschedule_*() always
There was a somewhat promiscuous pattern in the kernel where IPC
mechanisms would do something that might affect the current thread
choice, then check _must_switch_threads() (or occasionally
__must_switch_threads -- don't ask, the distinction is being replaced
by real English words), sometimes _is_in_isr() (but not always, even
in contexts where that looks like it would be a mistake), and then
call _Swap() if everything is OK, otherwise releasing the irq_lock().
Sometimes this was done directly, sometimes via the inverted test,
sometimes (poll, heh) by doing the test when the thread state was
modified and then needlessly passing the result up the call stack to
the point of the _Swap().

And some places were just calling _reschedule_threads(), which did all
this already.

Unify all this madness.  The old _reschedule_threads() function has
been split into two variants: _reschedule_yield() and
_reschedule_noyield().  The latter is the "normal" one that respects
the cooperative priority of the current thread (i.e. it won't switch
out even if there is a higher priority thread ready -- the current
thread has to pend itself first), the former is used in the handful of
places where code was doing a swap unconditionally, just to preserve
precise behavior across the refactor.  I'm not at all convinced it
should exist...

Signed-off-by: Andy Ross <andrew.j.ross@intel.com>
2018-04-24 03:57:20 +05:30