Commit graph

65 commits

Author SHA1 Message Date
Anas Nashif 6b6ecc0803 Revert "kernel: Enable interrupts for MULTITHREADING=n on supported arch's"
This reverts commit 17e9d623b4.

Single-thread mode kept introducing more issues, so we decided to remove
the feature completely and push any required changes to after 1.13.

See #9808

Signed-off-by: Anas Nashif <anas.nashif@intel.com>
2018-09-06 13:09:26 -04:00
Andy Ross 8daafd4fba kernel: Final spin in !MULTITHREADING should be locked
Now that we call main() with interrupts enabled in !MULTITHREADING, we
need to disable them again for the final fallback "loop-forever
because user code returned" state.  Otherwise some architectures will
toss interrupts into a context where we obviously aren't prepared to
handle them.

Signed-off-by: Andy Ross <andrew.j.ross@intel.com>
2018-08-30 13:29:09 -04:00
Andy Ross 17e9d623b4 kernel: Enable interrupts for MULTITHREADING=n on supported arch's
Some applications have a use case for a tiny MULTITHREADING=n build
(which lacks most of the kernel) but still want special-purpose
drivers in that mode that might need to handle interrupts.  This
creates a chicken and egg problem, as arch code (for obvious reasons)
runs _Cstart() with interrupts disabled, and enables them only on
switching into a newly created thread context.  Zephyr does not have a
"turn interrupts on now, please" API at the architecture level.

So this creates one as an arch-specific wrapper around
_arch_irq_unlock().  It's implemented as an optional macro the arch
can define to enable this behavior, falling back to the previous
scheme (and printing a helpful message) if it doesn't find it defined.
Only ARM and x86 are enabled in this patch.
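
A minimal sketch of the fallback pattern described here, with a
hypothetical macro name standing in for whatever the arch actually
defines:

        #include <misc/printk.h>

        /* Hypothetical names, for illustration only: if the arch
         * opts in, enable interrupts before entering main();
         * otherwise keep the old behavior and print a hint. */
        static void maybe_enable_irqs_before_main(void)
        {
        #ifdef ARCH_IRQ_ENABLE_BEFORE_MAIN
                ARCH_IRQ_ENABLE_BEFORE_MAIN(); /* wraps _arch_irq_unlock() */
        #else
                printk("NOTE: interrupts stay disabled with "
                       "CONFIG_MULTITHREADING=n on this arch\n");
        #endif
        }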

Fixes #8393

Signed-off-by: Andy Ross <andrew.j.ross@intel.com>
2018-08-27 16:15:10 -04:00
Anas Nashif b6304e66f6 tracing: support generic tracing hooks
Define a generic interface and hooks for tracing, replacing
kernel_event_logger and the existing tracing facilities with a more
common mechanism.

Signed-off-by: Anas Nashif <anas.nashif@intel.com>
2018-08-21 05:45:47 -07:00
Flavio Ceolin 6699423a2f kernel: Explicitly ignoring memcpy return
memcpy always returns a pointer to dest, so the return value can safely
be ignored.  Make this explicit so compilers never raise warnings or
errors about it.
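
A one-line sketch of the pattern (function and names invented for
illustration):

        #include <string.h>

        void copy_block(void *dst, const void *src, size_t n)
        {
                /* memcpy returns dst; the cast documents that the
                 * return value is intentionally unused. */
                (void)memcpy(dst, src, n);
        }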

Signed-off-by: Flavio Ceolin <flavio.ceolin@intel.com>
2018-08-16 19:47:41 -07:00
Shawn Mosley 573f32b6d2 userspace: compartmentalized app memory organization
Summary: revised attempt at addressing issue 6290.  The
following provides an alternative to using
CONFIG_APPLICATION_MEMORY by compartmentalizing data into
Memory Domains.  Depending on MPU limitations, it supports
compartmentalized Memory Domains for 1...N logical
applications.  This is considered an initial attempt at
designing flexible compartmentalized Memory Domains for
multiple logical applications and, with the provided python
script and edited CMakeLists.txt, provides support for power
of 2 aligned MPU architectures.

Overview: The current patch uses qualifiers to group data into
subsections.  The qualifier usage allows for dynamic subsection
creation and affords the developer a large amount of flexibility
in the grouping, naming, and size of the resulting partitions and
domains that are built on these subsections. By additional macro
calls, functions are created that help calculate the size,
address, and permissions for the subsections and enable the
developer to control application data in specified partitions and
memory domains.

Background: Initial attempts focused on creating a single
section in the linker script that then contained internally
grouped variables/data to allow MPU/MMU alignment and protection.
This did not provide additional functionality beyond
CONFIG_APPLICATION_MEMORY as we were unable to reliably group
data or determine their grouping via exported linker symbols.
Thus, the resulting decision was made to dynamically create
subsections using the current qualifier method. An attempt to
group the data by object file was tested, but found that this
broke applications such as ztest where two object files are
created: ztest and main.  This also creates an issue of grouping
the two object files together in the same memory domain while
also allowing for compartmenting other data among threads.

Because it is not possible to know a) the name of the partition,
and thus the symbol in the linker, b) the size of all the data
in the subsection, or c) the overall number of partitions
created by the developer, it was not feasible to align the
subsections at compile time without using a dynamically generated
linker script for MPU architectures requiring power of 2
alignment.

In order to provide support for MPU architectures that require a
power of 2 alignment, a python script is run at build time,
before linker_priv_stacks.cmd is generated.  This script scans the
built object files for all possible partitions and the names given
to them. It then generates a linker file (app_smem.ld) that is
included in the main linker.ld file.  This app_smem.ld allows the
compiler and linker to then create each subsection and align to
the next power of 2.

Usage:
 - Requires: app_memory/app_memdomain.h .
 - _app_dmem(id) marks a variable to be placed into a data
section for memory partition id.
 - _app_bmem(id) marks a variable to be placed into a bss
section for memory partition id.
 - These are seen in the linker.map as "data_smem_id" and
"data_smem_idb".
 - To create a k_mem_partition, call the macro
app_mem_partition(part0) where "part0" is the name then used to
refer to that partition. This macro only creates a function and
necessary data structures for the later "initialization".
 - To create a memory domain for the partition, the macro
app_mem_domain(dom0) is called where "dom0" is the name then
used for the memory domain.
 - To initialize the partition (effectively adding the partition
to a linked list), init_part_part0() is called. This is followed
by init_app_memory(), which walks all partitions in the linked
list and calculates the sizes for each partition.
 - Once the partition is initialized, the domain can be
initialized with init_domain_dom0(part0) which initializes the
domain with partition part0.
 - After the domain has been initialized, the current thread
can be added using add_thread_dom0(k_current_get()).
 - The code used in ztests and kernel/init has been added under
a conditional #ifdef to isolate the code from other tests.
The userspace test CMakeLists.txt file has commands to insert
the CONFIG_APP_SHARED_MEM definition into the required build
targets.
  Example:
        /* create partition at top of file outside functions */
        app_mem_partition(part0);
        /* create domain */
        app_mem_domain(dom0);
        _app_dmem(dom0) int var1;
        _app_bmem(dom0) static volatile int var2;

        int main()
        {
                init_part_part0();
                init_app_memory();
                init_domain_dom0(part0);
                add_thread_dom0(k_current_get());
                ...
        }

 - If multiple partitions are being created, a variadic
preprocessor macro can be used as provided in
app_macro_support.h:

        FOR_EACH(app_mem_partition, part0, part1, part2);

or, for multiple domains, similarly:

        FOR_EACH(app_mem_domain, dom0, dom1);

Similarly, the init_part_* can also be used in the macro:

        FOR_EACH(init_part, part0, part1, part2);

Testing:
 - This has been successfully tested on qemu_x86 and the
ARM frdm_k64f board.  It compiles and builds power of 2
aligned subsections for the linker script on the 96b_carbon
boards.  These power of 2 alignments have been checked by
hand and are viewable in the zephyr.map file that is
produced during build. However, due to a shortage of
available MPU regions on the 96b_carbon board, we are unable
to test this.
 - When run on the 96b_carbon board, the test suite will
enter execution, but each individual test will fail due to
an MPU FAULT.  This is expected as the required number of
MPU regions exceeds the number allowed due to the static
allocation. As the MPU driver does not detect this issue,
the fault occurs because the data being accessed has been
placed outside the active MPU region.
 - This now compiles successfully for the ARC boards
em_starterkit_em7d and em_starterkit_em7d_v22. However,
as we lack ARC hardware to run this build on, we are unable
to test this build.

Current known issues:
1) While the script and edited CMakeLists.txt create the
ability to align to the next power of 2, this does not
address the shortage of available MPU regions on certain
devices (e.g. 96b_carbon).  In testing, the APB and PPB
regions were commented out.
2) checkpatch.pl lists several issues regarding the
following:
a) Complex macros. The FOR_EACH macros as defined in
app_macro_support.h are listed as complex macros needing
parentheses.  Adding parentheses breaks their
functionality, and we have otherwise been unable to
resolve the reported error.
b) __aligned() preferred. The _app_dmem_pad() and
_app_bmem_pad() macros give warnings that __aligned()
is preferred. Prior iterations had this implementation,
which resulted in errors due to "complex macros".
c) Trailing semicolon. The macro init_part(name) has
a trailing semicolon as the semicolon is needed for the
inlined macro call that is generated when this macro
expands.

Update: updated to be an alternative to CONFIG_APPLICATION_MEMORY.
Added config option CONFIG_APP_SHARED_MEM to enable a new section
app_smem to contain the shared memory component.  This commit
separates the Kconfig definition from the definition used for the
conditional code.  The change is in response to changes in the
way the build system treats definitions.  The python script used
to generate a linker script for app_smem was also modified to
simplify the alignment directives.  A default linker script
app_smem.ld was added to remove the conditional include dependency
on CONFIG_APP_SHARED_MEM.  By adding the default linker script,
the prebuild stages link properly before the python script runs.
Signed-off-by: Joshua Domagalski <jedomag@tycho.nsa.gov>
Signed-off-by: Shawn Mosley <smmosle@tycho.nsa.gov>
2018-07-25 12:02:01 -07:00
Anas Nashif eda3e16ac7 coverage: exclude k_call_stacks_analyze from coverage
Do not count deprecated functions.

Signed-off-by: Anas Nashif <anas.nashif@intel.com>
2018-07-18 10:39:48 -04:00
Krzysztof Chruscinski 6b01c89935 logging: Add log initialization to system startup
The log API can be used before the user explicitly initializes the
logger.  In order to ensure that the logger core is ready to buffer
log messages, it must be initialized as early as possible.
Initialization does not include the default backend, since its driver
may not be ready yet and a backend is only needed once log messages
are processed.

Signed-off-by: Krzysztof Chruscinski <krzysztof.chruscinski@nordicsemi.no>
2018-07-14 08:32:44 -04:00
Andy Ross 3d14615f56 kernel: Restore CONFIG_MULTITHREADING=n behavior
The prepare_multithreading()/switch_to_main_thread() steps were being
done unconditionally, when with multithreading disabled we want to jump
straight into the main thread on the existing stack.

Needless to say, that doesn't work well.  Fixes #8361.

Signed-off-by: Andy Ross <andrew.j.ross@intel.com>
2018-06-13 17:23:05 -04:00
Carles Cufi b54644913d kernel: Use ISR-specific entropy function when available
During the early boot process, in prepare_multithreading(), the kernel
structures and scheduler are not ready yet.  In order to obtain entropy
for early work such as stack randomization, optionally use the
ISR-specific function that some drivers provide, when it is present.

Signed-off-by: Carles Cufi <carles.cufi@nordicsemi.no>
2018-05-24 15:13:13 -07:00
Andrew Boie 538754cb28 kernel: handle early entropy issues
We generalize querying the entropy driver directly with
a new internal API, which is now used by CONFIG_STACK_RANDOM
and stack canary initialization.

Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
2018-05-23 19:38:06 -07:00
Leandro Pereira 389c36439a kernel: init: Use entropy API directly to initialize stack canary
Some sys_rand32_get() implementations will use shared state and protect
that state using some synchronization primitive such as a mutex or a
semaphore.  It's too early in the boot process to use any of them,
which causes some issues.

Use the entropy API directly to set up the stack canaries.

This doesn't completely solve the problem, as some drivers will use the
same synchronization primitives anyway.  Some drivers (e.g.  the NRF5
entropy driver) provide an API to be used by ISRs that might be
suitable here, but not all drivers do that.
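
A minimal sketch of querying the entropy driver directly, assuming the
era's entropy API (entropy_get_entropy() and CONFIG_ENTROPY_NAME; the
helper name is invented):

        #include <device.h>
        #include <entropy.h>

        static u32_t early_canary_word(void)
        {
                struct device *dev =
                        device_get_binding(CONFIG_ENTROPY_NAME);
                u32_t word = 0;

                /* Bypass sys_rand32_get() and its locking; talk to
                 * the driver directly. */
                if (dev != NULL) {
                        (void)entropy_get_entropy(dev, (u8_t *)&word,
                                                  sizeof(word));
                }
                return word;
        }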

Signed-off-by: Leandro Pereira <leandro.pereira@intel.com>
2018-05-23 14:42:49 -07:00
Andrew Boie 982d5c8f55 init: run kernel_arch_init() earlier
This call was in prepare_multithreading(), which has been moved
to after driver initialization rather than before it.
That function now really just prepares system threads.

Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
2018-05-23 17:22:19 -04:00
Andrew Boie 4afc6c9ff2 kernel: remove STACK_ALIGN checks
STACK_ALIGN has somewhat different semantics across our arches,
particularly ARC.

These checks are unnecessary, _new_thread() is required
to properly align stack sizes anyway.

Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
2018-05-23 15:05:15 -05:00
Andrew Boie 72c7ded561 kernel: prepare threads after PRE_KERNEL*
prepare_multithreading() was done very early as it had a call
to initialize the interrupt subsystem. This was causing problems
with stack pointer randomization as any HW-based entropy drivers
had not been initialized.

Move the call to initialize the interrupt system out of
prepare_multithreading(), which now really does just prepare
the system to start threads. This is now done after the PRE_KERNEL
phases.

Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
2018-05-22 15:59:07 -07:00
Andy Ross 1acd8c2996 kernel: Scheduler rewrite
This replaces the existing scheduler (but not priority handling)
implementation with a somewhat simpler one.  Behavior as to thread
selection does not change.  New features:

+ Unifies SMP and uniprocessing selection code (with the sole
  exception of the "cache" trick not being possible in SMP).

+ The old static multi-queue implementation is gone and has been
  replaced with a build-time choice of either a "dumb" list
  implementation (faster and significantly smaller for apps with only
  a few threads) or a balanced tree queue which scales well to
  arbitrary numbers of threads and priority levels.  This is
  controlled via the CONFIG_SCHED_DUMB kconfig variable (see the
  config fragment after this list).

+ The balanced tree implementation is usable symmetrically for the
  wait_q abstraction, fixing a scalability glitch Zephyr had when many
  threads were waiting on a single object.  This can be selected via
  CONFIG_WAITQ_FAST.
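
A hedged prj.conf fragment showing the build-time choice above (option
names as introduced by this commit; which to pick depends on the app):

        # Few threads: the "dumb" list scheduler is smaller and faster.
        CONFIG_SCHED_DUMB=y
        # Many threads waiting on one object: balanced-tree wait_q.
        # CONFIG_WAITQ_FAST=y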

Signed-off-by: Andy Ross <andrew.j.ross@intel.com>
2018-05-19 07:00:55 +03:00
Andy Ross 15c400774e kernel: Rework SMP irq_lock() compatibility layer
This was wrong in two ways, one subtle and one awful.

The subtle problem was that the IRQ lock isn't actually globally
recursive, it gets reset when you context switch (i.e. a _Swap()
implicitly releases and reacquires it).  So the recursive count I was
keeping needs to be per-thread or else we risk deadlock any time we
swap away from a thread holding the lock.

And because part of my brain apparently knew this, there was an
"optimization" in the code that tested the current count vs. zero
outside the lock, on the argument that if it was non-zero we must
already hold the lock.  Which would be true of a per-thread counter,
but NOT a global one: the other CPU may be holding that lock, and this
test will tell you *you* do.  The upshot is that a recursive
irq_lock() would almost always SUCCEED INCORRECTLY when there was lock
contention.  That this didn't break more things is amazing to me.
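
An illustrative sketch of the broken fast path (names invented; not the
actual code):

        /* WRONG on SMP: the counter is global, so non-zero may mean
         * another CPU holds the lock, not the current thread. */
        static volatile unsigned int lock_count;

        unsigned int broken_irq_lock(void)
        {
                if (lock_count != 0) {  /* tested outside the lock */
                        lock_count++;   /* assumes recursion: bogus */
                        return 0;
                }
                /* ... actually acquire the hardware lock ... */
                lock_count = 1;
                return 0;
        }

With a per-thread counter the non-zero test really would imply that the
current thread owns the lock, which is the shape of the fix.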

The rework is actually simpler than the original, thankfully.  Though
there are some further subtleties:

* The lock state implied by irq_lock() allows the lock to be
  implicitly released on context switch (i.e. you can _Swap() with the
  lock held at a recursion level higher than 1, which needs to allow
  other processes to run).  So return paths into threads from _Swap()
  and interrupt/exception exit need to check and restore the global
  lock state, spinning as needed.

* The idle loop design specifies a k_cpu_idle() function that is on
  common architectures expected to enable interrupts (for obvious
  reasons), but there is no place to put non-arch code to wire it into
  the global lock accounting.  So on SMP, even CPU0 needs to use the
  "dumb" spinning idle loop.

Finally this patch contains a simple bugfix too, found by inspection:
the interrupt return code used when CONFIG_SWITCH is enabled wasn't
correctly setting the active flag on the threads, opening up the
potential for a race that might result in a thread being scheduled on
two CPUs simultaneously.

Signed-off-by: Andy Ross <andrew.j.ross@intel.com>
2018-05-02 10:00:17 -07:00
Andy Ross eb258706e0 kernel: Move SMP initialization to start of main thread
The smp_init() call was too early.  Device and subsystem
initialization doesn't happen until after the main thread starts
running.  Starting extra CPUs and allowing them to schedule threads
before their drivers are alive is a bad idea, even if it works in a
unit test.

Signed-off-by: Andy Ross <andrew.j.ross@intel.com>
2018-05-02 10:00:17 -07:00
Anas Nashif daf7716ddd build: use git version and hash for boot banner
This uses the version and hash (git describe) and replaces the timestamp
currently used in the boot banner. This works much better than using
timestamps. It lets us point to the exact commit being used to run a
certain application or test.

Signed-off-by: Anas Nashif <anas.nashif@intel.com>
2018-04-10 10:57:50 -04:00
Andy Ross 85bc0a3fe6 kernel: Cleanup, unify _add_thread_to_ready_q() and _ready_thread()
The scheduler exposed two APIs to do the same thing:
_add_thread_to_ready_q() was a low level primitive that in most cases
was wrapped by _ready_thread(), which also (1) checks that the thread
_is_ready() or exits, (2) flags the thread as "started" to handle the
case of a thread running for the first time out of a waitq timeout,
and (3) signals a logger event.

As it turns out, all existing usage was already checking case #1.
Case #2 can be better handled in the timeout resume path instead of on
every call.  And case #3 was probably wrong to have been skipping
anyway (there were paths that could make a thread runnable without
logging).

Now _add_thread_to_ready_q() is an internal scheduler API, as it
probably always should have been.

This also moves some asserts from the inline _ready_thread() wrapper
to the underlying true function for code size reasons, otherwise the
extra use of the inline added by this patch blows past code size
limits on Quark D2000.

Signed-off-by: Andy Ross <andrew.j.ross@intel.com>
2018-03-18 16:58:12 -04:00
Leandro Pereira a1ae8453f7 kernel: Name of static functions should not begin with an underscore
Names that begin with an underscore are reserved by the C standard.
This patch does not change names of functions defined and implemented
in header files.

Signed-off-by: Leandro Pereira <leandro.pereira@intel.com>
2018-03-10 08:39:10 -05:00
Andy Ross 245b54ed56 kernel/include: Missed nano_internal.h -> kernel_internal.h spots
Update header naming given the recent rename.

Signed-off-by: Andy Ross <andrew.j.ross@intel.com>
2018-02-16 10:44:29 -05:00
Andy Ross bdcd18a744 kernel: Enable SMP
Now that all the pieces are in place, enable SMP for real:

Initialize the CPU records, launch the CPUs at the end of kernel
initialization, have them wait for a flag to release them into the
scheduler, then enter into the runnable threads via _Swap().

Signed-off-by: Andy Ross <andrew.j.ross@intel.com>
2018-02-16 10:44:29 -05:00
Andy Ross 2724fd11cb kernel: SMP-aware scheduler
The scheduler needs a few tweaks to work in SMP mode:

1. The "cache" field just doesn't work.  With more than one CPU,
   caching the highest priority thread isn't useful as you may need N
   of them at any given time before another thread is returned to the
   scheduler.  You could recalculate it at every change, but that
   provides no performance benefit.  Remove.

2. The "bitmask" designed to prevent the need to individually check
   priorities is likewise dropped.  This could work, but in fact on
   our only current SMP system and with current K_NUM_PRIORITIES
   values it provides no real benefit.

3. The individual threads now have a "current cpu" and "active" flag
   so that the choice of the next thread to run can correctly skip
   threads that are active on other CPUs.

The upshot is that a decent amount of code gets #if'd out, and the new
SMP implementations for _get_highest_ready_prio() and
_get_next_ready_thread() are simpler and smaller, at the expense of
having to drop older optimizations.

Note that scheduler synchronization is unchanged: all scheduler APIs
used to require that an irq_lock() be held, which means that they now
require the global spinlock via the same API.  This should be a very
early candidate for lock granularity attention!

Signed-off-by: Andy Ross <andrew.j.ross@intel.com>
2018-02-16 10:44:29 -05:00
Andy Ross 780ba23eb8 kernel: Create idle threads and interrupt stacks for SMP processors
Simple implementation that caps at 4 CPUs.  Long term we should use
some linker magic to define as many as needed and loop over them
without needlessly increasing data or code size for the tracking.

Signed-off-by: Andy Ross <andrew.j.ross@intel.com>
2018-02-16 10:44:29 -05:00
Andy Ross 9c62cc677d kernel: Add kswap.h header to unbreak cycles
The xtensa-asm2 work included a patch that added nano_internal.h
includes in lots of places that needed to have _Swap defined, because
it had to break a cycle and this no longer got pulled in from the arch
headers.

Unfortunately those new includes created new and more amusing cycles
elsewhere which led to breakage on other platforms.

Break out the _Swap definition (only) into a separate header and use
that instead.  Cleaner.  Seems not to have any more hidden gotchas.

Signed-off-by: Andy Ross <andrew.j.ross@intel.com>
2018-02-16 10:44:29 -05:00
Andy Ross 32a444c54e kernel: Fix nano_internal.h inclusion
_Swap() is defined in nano_internal.h.  Everything calls _Swap().
Pretty much nothing that called _Swap() included nano_internal.h,
expecting it to be picked up automatically through other headers (as
it happened, from the kernel arch-specific include file).  A new
_Swap() is going to need some other symbols in the inline definition,
so I needed to break that cycle.  Now nothing sees _Swap() defined
anymore.  Put nano_internal.h everywhere it's needed.

Our kernel includes remain a big awful yucky mess.  This makes things
more correct but no less ugly.  Needs cleanup.

Signed-off-by: Andy Ross <andrew.j.ross@intel.com>
2018-02-16 10:44:29 -05:00
Erwin Rol 1dc41d19b3 kernel: init: initialize stm32 ccm sections
Initialize the ccm_bss section to zero.
Copy the ccm_data section from the rom section.

Signed-off-by: Erwin Rol <erwin@erwinrol.com>
2018-02-13 12:36:22 -06:00
Adithya Baglody 7bb40bd9ca kernel: init: mem_domain structure is initialized for dummy thread.
For the dummy thread, the contents of the mem_domain structure
are insignificant, hence it is set to NULL.

Signed-off-by: Adithya Baglody <adithya.nagaraj.baglody@intel.com>
2017-11-07 12:22:43 -08:00
Andrew Boie 818a96d3af userspace: assign thread IDs at build time
Kernel object metadata had an extra data field added recently to
store bounds for stack objects. Use this data field to assign
IDs to thread objects at build time. This has numerous advantages:

* Threads can be granted permissions on kernel objects before the
  thread is initialized (see the sketch after this list). Previously,
  it was necessary to call k_thread_create() with a K_FOREVER delay,
  assign permissions, then start the thread. Permissions are still
  completely cleared when

* No need for runtime logic to manage thread IDs

* Build error if CONFIG_MAX_THREAD_BYTES is set too low
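
A minimal sketch of the first point, using era-appropriate calls
(k_object_access_grant(), k_thread_create(); object names and sizes are
illustrative):

        #include <kernel.h>

        extern struct k_sem my_sem;
        extern void worker_fn(void *p1, void *p2, void *p3);

        K_THREAD_STACK_DEFINE(worker_stack, 1024);
        static struct k_thread worker_data;

        void start_worker(void)
        {
                /* Grant before the thread is initialized -- possible
                 * because its ID now exists at build time. */
                k_object_access_grant(&my_sem, &worker_data);

                k_thread_create(&worker_data, worker_stack,
                                K_THREAD_STACK_SIZEOF(worker_stack),
                                worker_fn, NULL, NULL, NULL,
                                K_PRIO_PREEMPT(5), K_USER, K_NO_WAIT);
        }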

Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
2017-11-03 11:29:23 -07:00
Adithya Baglody edd072e730 tests: benchmarking: cleanup of the benchmarking code.
The kernel will no longer reference the code written in the
test folder.

GH-1236

Signed-off-by: Adithya Baglody <adithya.nagaraj.baglody@intel.com>
2017-11-02 09:01:06 -04:00
Leandro Pereira adce1d1888 subsys: Add random subsystem
Some "random" drivers are not drivers at all: they just implement the
function `sys_rand32_get()`.  Move those to a random subsystem in
preparation for a reorganization.

Signed-off-by: Leandro Pereira <leandro.pereira@intel.com>
2017-11-01 08:26:29 -04:00
Youvedeep Singh 9644f6782e kernel: boot_delay: change to busy wait instead of wait
The intention of CONFIG_BOOT_DELAY is to delay booting of the system for
a certain time.  Currently it only delays the start of the _main thread,
because the delay is created using k_sleep.  This puts the _main thread
into the timeout queue while kernel boot continues, which causes
undesirable effects in some test automation use cases.
This patch changes k_sleep to k_busy_wait, which delays the OS boot
instead of delaying the start of _main.

Signed-off-by: Youvedeep Singh <youvedeep.singh@intel.com>
Signed-off-by: Anas Nashif <anas.nashif@intel.com>
2017-10-28 14:20:25 -04:00
Andrew Boie 04caa679c9 userspace: allow thread IDs to be re-used
It's currently too easy to run out of thread IDs as they
are never re-used on thread exit.

Now the kernel maintains a bitfield of in-use thread IDs,
updated on thread creation and termination. When a thread
exits, the permission bitfield for all kernel objects is
updated to revoke access for that retired thread ID, so that
a new thread re-using that ID will not gain access to objects
that it should not have.

Because of these runtime updates, setting the permission
bitmap for an object to all ones for a "public" object doesn't
work properly any more; a flag is now set for this instead.

Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
2017-10-16 16:16:28 -07:00
Andrew Boie 2acfcd6b05 userspace: add thread-level permission tracking
Now creating a thread will assign it a unique, monotonically increasing
id which is used to reference the permission bitfield in the kernel
object metadata.

Stub functions in userspace.c are now implemented.

_new_thread is now wrapped in a common function with pre- and post-
architecture thread initialization tasks.

Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
2017-09-12 12:46:36 -07:00
Anas Nashif 8920cf127a cleanup: Move #include directives
Move all #include directives to the very top of the file, before any
code.

Signed-off-by: Anas Nashif <anas.nashif@intel.com>
2017-09-11 12:41:07 -04:00
Andrew Boie 0a85eaad05 init: initialize dummy thread stack info
Garbage values here could wreak havoc on the initial switch to main
depending on how arch-specific _Swap() manages memory permissions when
switching threads.

Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
2017-09-07 16:34:41 -07:00
Andrew Boie 945af95f42 kernel: introduce object validation mechanism
All system calls made from userspace which involve pointers to kernel
objects (including device drivers) will need to have those pointers
validated; userspace should never be able to crash the kernel by passing
it garbage.

The actual validation with _k_object_validate() will be in the system
call receiver code, which doesn't exist yet.

- CONFIG_USERSPACE introduced. We are somewhat far away from having an
  end-to-end implementation, but at least need a Kconfig symbol to
  guard the incoming code with. Formal documentation doesn't exist yet
  either, but will appear later down the road once the implementation is
  mostly finalized.

- In the memory region for RAM, the data section has been moved last,
  past bss and noinit. This ensures that inserting generated tables
  with addresses of kernel objects does not change the addresses of
  those objects (which would make the table invalid)

- The DWARF debug information in the generated ELF binary is parsed to
  fetch the locations of all kernel objects and pass this to gperf to
  create a perfect hash table of their memory addresses.

- The generated gperf code doesn't know that we are exclusively working
  with memory addresses and uses memory inefficiently. A post-processing
  script process_gperf.py adjusts the generated code before it is
  compiled to work with pointer values directly and not strings
  containing them.

- _k_object_init() calls inserted into the init functions for the set of
  kernel object types we are going to support so far

Issue: ZEP-2187
Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
2017-09-07 16:33:33 -07:00
Inaky Perez-Gonzalez 1abd064ce7 boot: move boot banner and delay before SYS_INIT_LEVEL_APPLICATION
Fixes https://github.com/zephyrproject-rtos/zephyr/issues/1280, but
also many other failures where output was garbled because of this.
Other similarly affected issues are the missing first benchmark
(context) in the latency benchmark and some net tests.

Signed-off-by: Inaky Perez-Gonzalez <inaky.perez-gonzalez@intel.com>
2017-09-07 18:29:05 -05:00
Youvedeep Singh 76b577e180 tests: benchmark: timing_info: Change API/variable Name.
The API/variable names in timing_info look very specific to a
platform (like systick etc.), whereas these variables are used
across platforms (nrf/arm/quark).
So this patch:
1. changes API/variable names to generic ones.
2. creates some macros whose implementation is platform
dependent.

Jira: ZEP-2314

Signed-off-by: Youvedeep Singh <youvedeep.singh@intel.com>
2017-08-31 14:25:31 -04:00
Anas Nashif 83088a235c kernel: init: print boot banner before static threads
The boot banner is being printed after static threads have started, for
example this is visible with tests using ztest.
This puts the banner message before starting any threads.

Signed-off-by: Anas Nashif <anas.nashif@intel.com>
2017-08-24 10:51:04 -04:00
Wayne Ren f8d061faf7 arch: arc: add nested interrupt support
* add nested interrupt support for interrupts
   + use a variable exc_nest_count to track interrupt and exception
     nesting
   + regular interrupts can be nested by regular interrupts and fast
     interrupts
   + the fast interrupt's priority is the highest; it cannot be nested
* remove the firq stack and exception stack
   + remove the corresponding kconfig option
   + all interrupts (normal and fast) and exceptions will be handled
     on the same stack (_interrupt stack)
   + the pros are a smaller memory footprint (no firq stack), simpler
     stack management, simpler code, etc.  The cons are a possible
     10-15 instruction overhead for the case where a fast irq nests
     a regular irq
* add the ARC case in test/kernel/gen_isr_table

Signed-off-by: Wayne Ren <wei.ren@synopsys.com>
Signed-off-by: Anas Nashif <anas.nashif@intel.com>
2017-08-10 12:47:15 -04:00
Andrew Boie 0fab8a6dc5 x86: page-aligned stacks with guard page
Subsequent patches will set this guard page as unmapped,
triggering a page fault on access. If this is due to
stack overflow, a double fault will be triggered,
which we are now capable of handling with a switch to
a known-good stack.

Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
2017-07-25 11:32:36 -04:00
Inaky Perez-Gonzalez c51f73f77f boot: add CONFIG_BOOT_DELAY option
Introduce a configurable boot delay option (defaulting to none) that
happens right after printing a boot delay banner, before calling
main() in kernel/init.c:_main(), before taking timestamps for _main()
and once all the infrastructure is in place.  Also move the boot
banner to happen after this delay.

The rationale for this is some boards will boot really fast and print
out some test case output in the serial port before the system that is
monitoring the serial port is able to read from the serial port.

This happens in MCUs whose serial port is embedded in a USB connection
which is also used to power the MCU board.  When powering it on by
powering the USB port, it takes the host system some time to detect the
USB connection, enumerate the serial port, configure it and load, start
and read from the serial port.  By that time, the board may already
have printed output to the serial port.

While manually it is possible to press a reset button, on automation
setups this adds a lot of overhead and cabling or modifications to the
MCU that are easier (and cheaper) to overcome with this delay. Other
options (like using a separate serial line) might not be possible or
add a lot of cabling and cost, plus it'd also add extra build
configuration.

Change-Id: I2f4d1ba356de6cefa19b4ef5c9f19f87885d4dfd
Signed-off-by: Inaky Perez-Gonzalez <inaky.perez-gonzalez@intel.com>
2017-07-18 08:31:45 +03:00
Andrew Boie bf5228ea56 kernel: add early init routines for app RAM
Applications will have their own BSS and data sections which
will need to be additionally copied.

This covers the common C implementation of these functions.
Arches which implement their own optimized versions will need
to be updated.

Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
2017-06-29 07:46:58 -04:00
Anas Nashif 397d29db42 linker: move all linker headers to include/linker
Signed-off-by: Anas Nashif <anas.nashif@intel.com>
2017-06-18 09:24:04 -05:00
Adithya Baglody be1cb961ad tests: benchmark: boot_time: Reading time stamps made arch agnostic
1. Changed _tsc_read() to k_cycle_get_32(), so reading the
time stamp is agnostic of the architecture used.
2. Changed the variable names from *_tsc to *_time_stamp.

JIRA: ZEP-1426

Signed-off-by: Adithya Baglody <adithya.nagaraj.baglody@intel.com>
2017-06-16 07:37:37 -05:00
Andrew Boie dc5d935d12 kernel: introduce stack definition macros
The existing __stack decorator is not flexible enough for upcoming
thread stack memory protection scenarios. Wrap the entire thing in
a declaration macro abstraction instead, which can be implemented
on a per-arch or per-SOC basis.
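
A brief sketch of the new declaration style (macro name as introduced
here; the size is illustrative):

        #include <kernel.h>

        /* Replaces the old "static char __stack my_stack[512];"
         * idiom; arches/SoCs can redefine the macro to add guards
         * or alignment. */
        K_THREAD_STACK_DEFINE(my_stack, 512);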

Issue: ZEP-2185
Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
2017-06-09 18:53:28 -04:00
Andrew Boie 50a533f7a5 kernel: init: mark initial dummy thread
The dummy thread context used for the initial __swap to the main
thread at early kernel initialization was not marked as a dummy
thread, as it ought to have been.

Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
2017-05-13 15:14:41 -04:00
Andrew Boie d26cf2dc33 kernel: add k_thread_create() API
Unlike k_thread_spawn(), the struct k_thread can live anywhere and not
in the thread's stack region. This will be useful for memory protection
scenarios where private kernel structures for a thread are not
accessible by that thread, or we want to allow the thread to use all the
stack space we gave it.

This requires a change to the internal _new_thread() API as we need to
provide a separate pointer for the k_thread.

By default, we still create internal threads with the k_thread in stack
memory. Forthcoming patches will change this, but we first need to make
it easier to define k_thread memory of variable size depending on
whether we need to store coprocessor state or not.
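
A short usage sketch against the API as it exists at this commit (the
stack idiom of the time; sizes, priority, and names are illustrative):

        #include <kernel.h>

        static struct k_thread worker;          /* outside the stack */
        static char __stack worker_stack[512];

        extern void worker_fn(void *p1, void *p2, void *p3);

        void spawn_worker(void)
        {
                k_thread_create(&worker, worker_stack,
                                sizeof(worker_stack),
                                worker_fn, NULL, NULL, NULL,
                                K_PRIO_PREEMPT(7), 0, K_NO_WAIT);
        }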

Change-Id: I533bbcf317833ba67a771b356b6bbc6596bf60f5
Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
2017-05-11 20:24:22 -04:00