This test was written to assume ~100 Hz ticks in ways that are
difficult to fix. It wants to sleep for periods on the order of the
TICKLESS_IDLE_THRESH kconfig, which is extremely small on high tick
rate systems and (on nRF in particular) does not have a cleanly
divisible representation in milliseconds.
Fixing precision issues by cranking the idle threshold up on a
per-system basis seems like an abuse, as that is what we want to be
testing in the first place. Just let the test run at the tick rate it
has always expected.
Signed-off-by: Andy Ross <andrew.j.ross@intel.com>
The logic about minimal sleep sizes due to "tick" aliasing was
correct, but drivers also have similar behavior with "cycle" aliasing
too. When cycles are 3-4 orders of magnitude faster than ticks, it's
undetectable noise. But now on nRF they're exactly the same and we
need to correct for that, essentially doubling the number of ticks a
usleep() might wait for.
The logic here was simply too strict: fast tick rates can't
guarantee what the test promised.
Note that this relaxes the test bounds on the other side of the
equation too: it's no longer an error to usleep() for only one tick
(i.e. an improved sleep/timeout implementation no longer gets detected
as a test failure).
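As a rough sketch of the bound the test can now honestly promise
(the variable names here are illustrative, not the actual test
code): one tick of slop for "tick" aliasing plus one more for
"cycle" aliasing, and a one-tick sleep is no longer an error.

  int32_t slop_ticks = 2;  /* tick aliasing + cycle aliasing */

  zassert_true(actual_ticks >= 1, "slept for less than one tick");
  zassert_true(actual_ticks <= requested_ticks + slop_ticks,
               "slept too long");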
Signed-off-by: Andy Ross <andrew.j.ross@intel.com>
The scheduler API has always allowed setting a zero slice size as a
way to disable timeslicing. But the workaround introduced for
CONFIG_SWAP_NONATOMIC forgot that convention, and was calling
reset_time_slice() with that zero value (i.e. requesting an immediate
interrupt) in circumstances where z_swap() had been interrupted
nonatomically.
In practice, this never happened. And if it did, it was a single
spurious no-op interrupt that no one cared about. Until it did,
anyway...
Now that ticks on nRF devices are at full 32 kHz speed, we can get
into a situation where the rapidly triggering timeslice interrupts are
interrupting z_swap() calls, and the process feeds back on itself and
becomes self-sustaining.
Put the check for a zero slice size into the time slice code itself
to prevent this kind of mistake in the future.
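A minimal sketch of that guard, assuming a reset_time_slice()-style
helper and a slice_time variable holding the slice size in ticks
(the real kernel code may differ in detail):

  static void reset_time_slice(void)
  {
          /* Zero means "timeslicing disabled" by API convention, so
           * never turn it into a request for an immediate interrupt.
           */
          if (slice_time == 0) {
                  return;
          }

          z_set_timeout_expiry(slice_time, false);  /* assumed helper */
  }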
Signed-off-by: Andy Ross <andrew.j.ross@intel.com>
The default tick rate is now 10 kHz, but that driver was demanding
that it be exactly 1 kHz instead of at least that rate. I checked the
source, and the driver isn't actually extracting "ticks" from the
kernel illegally; it just needs fine-grained timers that work with the
existing millisecond API. Let it build, this is fine.
Signed-off-by: Andy Ross <andrew.j.ross@intel.com>
The sleep test was checking that the sleep took no more than "2
ticks" longer than requested. But "2 ticks" for fast tick rate
configurations can be "zero ms", and for aliasing reasons it's always
possible to delay for 1 unit more than requested (because you can
cross a millisecond/tick/whatever boundary in your own code on either
side of the sleep). So that "slop" value needs to be no less than
1ms.
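A sketch of the adjusted slop computation (the conversion helper
and variable names are assumptions, not the literal test source):

  /* Two ticks of slop, but never less than 1 ms, since a millisecond
   * boundary can be crossed on either side of the sleep.
   */
  int32_t slop_ms = MAX(1, k_ticks_to_ms_ceil32(2));

  zassert_true(elapsed_ms - requested_ms <= slop_ms, "slept too long");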
Signed-off-by: Andy Ross <andrew.j.ross@intel.com>
The test for the k_uptime_delta utilities was calling it in a loop and
waiting for the uptime to advance. But the code specifically
wanted it to advance 5 ms or more in one go, which clearly isn't going
to work for a tick rate above 200 Hz.
The weird thing is that the test knew this and even commented about
the limitation. Which seems silly: it's perfectly fine for the clock
to advance just a single millisecond. That's correct behavior too.
Let's test for that, and it will work everywhere.
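A sketch of the relaxed check (not the literal test source): spin
until k_uptime_delta() reports any advance at all, then accept it.

  int64_t delta = 0;
  int64_t reftime = k_uptime_get();

  /* Any nonzero advance is correct behavior, even a single ms. */
  while (delta == 0) {
          delta = k_uptime_delta(&reftime);
  }

  zassert_true(delta >= 1, "uptime failed to advance");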
Signed-off-by: Andy Ross <andrew.j.ross@intel.com>
When ticks are sub-millisecond, the math produces minimum and maximum
values for the slice duration test that are equal. But because of
aliasing across tick boundaries, it's always possible (for any time
period, nothing specific to time slicing here) to measure one tick
more than an intended duration. So make sure there's always at least
a range of 1ms.
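The bound adjustment might look like this sketch (variable names
are illustrative):

  /* With sub-millisecond ticks the computed bounds collapse to the
   * same value; widen the window so one tick of aliasing is
   * tolerated.
   */
  if (expected_max_ms <= expected_min_ms) {
          expected_max_ms = expected_min_ms + 1;
  }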
Signed-off-by: Andy Ross <andrew.j.ross@intel.com>
As with the STM32 boards, these had an existing default for tick rate
that is now lower than the 10 kHz default. They're SysTick devices
that can handle the higher rate just fine. Use that.
Signed-off-by: Andy Ross <andrew.j.ross@intel.com>
The nRF timer runs at only 32 kHz, so there's little reason to try to
divide it to get a synthesized tick rate. Just use the raw clock as
the tick rate, which provides maximal precision and very
significantly simplifies the generated code for the ISR.
Signed-off-by: Andy Ross <andrew.j.ross@intel.com>
The ARC timer is a MHz-scale cycle counter and works very well with
the new 10 kHz default tick rate. Remove the settings for ARC
hardware.
Note that the nsim board definitions are left at 100 Hz. That is a
software emulation environment that (like qemu) exposes the host clock
as "real" time and thus is subject to clock jitter due to host
scheduling.
Signed-off-by: Andy Ross <andrew.j.ross@intel.com>
These all have what appears to be a promiscuously cut-and-pasted
declaration for a 1000 Hz tick rate. They are all SysTick boards and
will work very well with the new 10 kHz default, so use that instead.
Signed-off-by: Andy Ross <andrew.j.ross@intel.com>
When tickless is available, all existing devices can handle much
higher timing precision than 10 ms. A 10 kHz default seems acceptable
without introducing too much range limitation (rollover for a signed
time delta will happen at 2.5 days). Leave the 100 Hz default in
place for ticked configurations, as those are going to be special
purpose usages where the user probably actually cares about interrupt
rate.
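(For reference: a signed 32-bit tick count at 10 kHz rolls over
after 2^31 / 10000, about 214748 seconds, i.e. just under 2.5 days.)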
Note that the defaulting logic interacts with an obscure trick:
setting the tick rate to zero would indicate "no clock exists" to the
configuration (some platforms use this to drop code from the build).
But now that becomes a kconfig cycle, so to break it we expose
CONFIG_SYS_CLOCK_EXISTS as an app-defined tunable and not a derived
value from the tick rate. Only one test actually used this trick.
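For instance, an application that truly has no system clock would
now say so explicitly in its configuration instead of setting a
zero tick rate (a hedged illustration, not taken from a specific
test):

  CONFIG_SYS_CLOCK_EXISTS=n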
Signed-off-by: Andy Ross <andrew.j.ross@intel.com>
Tick rate is becoming a platform tunable in the tickless world. Some
apps were setting it due to requirements of drivers or subsystems (or
sometimes for reasons that don't make much sense), but the dependency
goes the other way around now: board/soc/arch level code is
responsible for setting tick rates that work with their devices.
A few tests still use hard-configured tick rates, as they have
baked-in assumptions (e.g. "a tick will be longer than a
millisecond") that need to be addressed first.
Signed-off-by: Andy Ross <andrew.j.ross@intel.com>
This code was clearly written to assume that the timeout argument to
k_mem_pool_alloc() was in ticks and not ms. Adjust to what appears to
have been the intent. It was working as intended (i.e. waiting one second or
1/10th of a second) only on systems where the default tick rate was
100 Hz.
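A hypothetical before/after of the unit fix (the actual values in
the test differ; this only shows the shape of the change):

  /* Before: meant as "100 ticks", i.e. one second only at 100 Hz. */
  k_mem_pool_alloc(&pool, &block, size, 100);

  /* After: the timeout argument is milliseconds, so say so. */
  k_mem_pool_alloc(&pool, &block, size, 1000);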
Signed-off-by: Andy Ross <andrew.j.ross@intel.com>
When a tick was shorter than MIN_DELAY cycles, bumping a "too soon"
expiration by just one tick may not be enough, and we could
theoretically miss the counter.
Instead, eliminate the MIN_DELAY computation and write to the spec:
nRF guarantees that the RTC will generate an interrupt for a
comparator value two cycles in the future. And further, we can test
at the set point to see if we "just missed" the interrupt (i.e. zero
cycles delay) and flag a synchronous interrupt. So we only need to
miss a requested interrupt now for the special case of exactly one
cycle in the future, and then we're only late by one cycle. That's
optimal.
Also fixes an off-by-one in the next cycle computation. By API
convention, a ticks argument of one or less means "at the next tick"
and not "right now". So we need to add one to the target cycle to
avoid incorrectly triggering a synchronous interrupt. This was a
non-issue when a tick was longer than a hardware cycle, but it
matters now.
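A rough sketch of the resulting flow (all names are illustrative
rather than the driver's actual code):

  uint32_t now = rtc_counter();

  /* "+ 1": a ticks argument of one or less means "at the next
   * tick", never "right now", so the target must land strictly in
   * the future.
   */
  uint32_t target = (now + ticks * CYC_PER_TICK + 1) & COUNTER_MASK;

  rtc_set_compare(target);

  if (((rtc_counter() - target) & COUNTER_MASK) < 2) {
          /* We "just missed" the compare value, so pend the
           * interrupt by hand instead of waiting for the counter to
           * wrap all the way around.
           */
          NVIC_SetPendingIRQ(RTC1_IRQn);
  }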
Also handles the edge case where zero latency interrupts (which are
unmaskable) might mess up the timing. This was always a problem,
but we're more sensitive now and it's comparatively more likely to
occur.
Signed-off-by: Andy Ross <andrew.j.ross@intel.com>
We have a number of timing sensitive tests which run
correctly on a much more frequent basis if the system
is not so heavily loaded. Instead of squeezing a few
more crumbs of performance by doubling the CPU count,
just use the number of CPUs reported by the system.
Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
* if the thread switch happens in an interrupt, the target sp must be in
  the thread's kernel stack, so there is no need to do a hardware sp switch
Signed-off-by: Wayne Ren <wei.ren@synopsys.com>
If maxsize is smaller than _MPOOL_MINBLK, then Z_MPOOL_LVLS() will be 0.
That means the loop in z_sys_mem_pool_base_init() that initializes the
block free list for the nonexistent level 0 will corrupt whatever memory
happens to be where the zero-sized struct sys_mem_pool_lvl array was
located. And the corruption happens to be done with a perfectly legit
memory pool block address which makes for really nasty bugs to solve.
This is more likely on 64-bit systems, where _MPOOL_MINBLK is twice
as large as on 32-bit systems.
Let's prevent that with a build-time assertion on maxsize when defining
a memory pool, and adjust the affected test accordingly.
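The guard could be as simple as this sketch inside the
pool-defining macro (exact macro and message wording are
assumptions):

  BUILD_ASSERT(maxsz >= _MPOOL_MINBLK,
               "memory pool max block size is below the allocator minimum");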
Signed-off-by: Nicolas Pitre <npitre@baylibre.com>
The "bits" field in struct sys_mem_pool_lvl is unioned with a pointer.
That leaves more space for inline free bits on 64-bit targets.
Let's declare it as an array and adjust its size based on the pointer
size. On 32-bit targets the generated code remains identical.
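Roughly, the layout becomes (a sketch; field names may differ
slightly from the actual header):

  struct sys_mem_pool_lvl {
          union {
                  uint32_t *bits_p;                      /* levels with many blocks */
                  uint32_t bits[sizeof(uint32_t *) / 4]; /* 1 word on 32-bit, 2 on 64-bit */
          };
          sys_dlist_t free_list;
  };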
Signed-off-by: Nicolas Pitre <npitre@baylibre.com>
Minimum alignment and rounding must be done on a word boundary. Let's
replace _ALIGN4() with WB_UP() which is equivalent on 32-bit targets,
and 64-bit aware.
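For example, with 8-byte pointers WB_UP(12) rounds up to 16, while
the old _ALIGN4(12) left it at 12, which is not word-aligned on a
64-bit target; on 32-bit targets both give 12.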
Also enforce a minimal alignment on the memory pool. This is making
a difference mostly on 64-bit targets where the widely used 4-byte
alignment is not sufficient.
The _ALIGN4() macro has no users left so it is removed.
Signed-off-by: Nicolas Pitre <npitre@baylibre.com>
If GCOV coverage is enabled, the coverage dump happens after
"PROJECT EXECUTION SUCCESSFUL" is printed. In some cases,
the additional time added was not enough to capture all the
GCOV output on a heavily loaded system before the emulator
gets killed.
Ideally, the decision to kill the emulator needs to be smarter
and less race-prone, but that can wait for a future
enhancement.
Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
UART-related defines in STM32F7 files were filled with
references to USART.
Instances 4, 5, 7 and 8 of the SoC's serial ports are actually UARTs,
so rename the defines accordingly. Otherwise it could not build.
Signed-off-by: Erwan Gouriou <erwan.gouriou@linaro.org>
The MVIC is no longer supported, and only the APIC-based interrupt
subsystem remains. Thus this layer of indirection is unnecessary.
This also corrects an oversight left over from the Jailhouse x2APIC
implementation affecting EOI delivery for direct ISRs only.
Signed-off-by: Charles E. Youse <charles.youse@intel.com>
Don't allow inadvertent use of the existing z_x86_msr_read() when
compiled in long mode (CONFIG_X86_LONGMODE) as it won't work.
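A sketch of the guard (the inline is shown for context; details
may differ from the tree):

  #ifndef CONFIG_X86_LONGMODE
  static inline uint64_t z_x86_msr_read(unsigned int msr)
  {
          uint64_t ret;

          /* "=A" means the edx:eax pair only in 32-bit mode, which
           * is how rdmsr returns its result; in long mode this
           * constraint does not do that, hence the guard above.
           */
          __asm__ volatile("rdmsr" : "=A" (ret) : "c" (msr));

          return ret;
  }
  #endif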
Signed-off-by: Charles E. Youse <charles.youse@intel.com>
These inlines currently only apply to IA32, so place accordingly.
Minor changes to direct and indirect users of the file for ordering.
Signed-off-by: Charles E. Youse <charles.youse@intel.com>
This file is only included from arch.h, so merge it into same. This
also avoids confusion with files in arch/x86/include/ of the same name.
Signed-off-by: Charles E. Youse <charles.youse@intel.com>
The compiler is going to make better per-arch/per-implementation
choices about bit operations, so let's use the common definitions.
Signed-off-by: Charles E. Youse <charles.youse@intel.com>
This header is currently IA32-specific, so move it into the subarch
directory and update references to it.
Signed-off-by: Charles E. Youse <charles.youse@intel.com>
Making room for the Intel64 subarch in this tree. This header is
32-bit specific and so it's relocated, and references rewritten
to find it in its new location.
Signed-off-by: Charles E. Youse <charles.youse@intel.com>
This file is currently IA32-specific, so it is moved and the
reference to it at the arch-independent layer is moved.
Signed-off-by: Charles E. Youse <charles.youse@intel.com>
This file is 32-bit specific, so it is moved into the ia32/ directory
and references to it are updated accordingly.
Also, SP_ARG* definitions are no longer used, so they are removed.
Signed-off-by: Charles E. Youse <charles.youse@intel.com>
Eliminate definitions for MSRs that we don't use. Centralize the
definitions for the MSRs that we do use, including their fields.
Signed-off-by: Charles E. Youse <charles.youse@intel.com>
This pattern exists in both the include/arch/x86 and arch/x86/include
trees. This indirection is historic and unnecessary, as all supported
toolchains for x86 support gas/gcc-style inline assembly.
Signed-off-by: Charles E. Youse <charles.youse@intel.com>
z_arm_do_syscall is executing in privileged mode. This implies
that we shall not be allowed to use the thread's default
unprivileged stack (i.e. push to or pop from it), to avoid any
possible stack corruption.
Note that since we execute in PRIV mode and no MPU guard or
PSPLIM register is guarding the end of the default stack, we
won't be able to detect any stack overflows.
This commit implements the above change by forcing
z_arm_do_syscall() to FIRST switch to privileged
stack and then do all the preparations to execute
the system call.
Signed-off-by: Ioannis Glaropoulos <Ioannis.Glaropoulos@nordicsemi.no>
We need to correct the inline comment in swap_helper.S,
which is suggesting that system call attempts with
invalid syscall IDs (i.e. above the limit) do not force
the CPU to elevate privileges. This is in fact not true,
since the execution flow moves into valid syscall ID
handling.
In other words, all we do for system calls with invalid
ID numbers is to treat them as valid syscalls with the
K_SYSCALL_BAD ID value.
We fix the inline documentation to reflect the actual
execution flow.
Signed-off-by: Ioannis Glaropoulos <Ioannis.Glaropoulos@nordicsemi.no>
System call arguments are indexed from 1 to 6, so arg0
is corrected to arg1 in two places. In addition, the
ARM function for system calls is now called z_arm_do_syscall,
so we update the inline comment in __svc handler.
Signed-off-by: Ioannis Glaropoulos <Ioannis.Glaropoulos@nordicsemi.no>
The IPC peripheral is missing from the list of
supported HW for nRF9160, so this commit adds
that.
Signed-off-by: Ioannis Glaropoulos <Ioannis.Glaropoulos@nordicsemi.no>
target_sources() documentation states:
Relative source file paths are interpreted as being relative to the
current source directory (i.e. CMAKE_CURRENT_SOURCE_DIR).
Remove spurious code duplicating cmake's behaviour. It proved to be a
time-consuming red herring while debugging some path-related issue and
"less is more".
Signed-off-by: Marc Herbert <marc.herbert@intel.com>
This commit changes the TX priority from ID based priority to
chronological order. The advantage is that when messages with
the same ID are sent, the order is retained.
Signed-off-by: Alexander Wachter <alexander.wachter@student.tugraz.at>
Removed the STM32 CAN_Init function and implemented the initialization
in the driver.
Signed-off-by: Alexander Wachter <alexander.wachter@student.tugraz.at>
So that the time used to run boards which use the BinaryHandler can
be reported, record the time from spawning the process until
it finishes or is killed.
The BinaryHandler is used by the "native" boards, unit tests,
nsim and Renode.
Signed-off-by: Alberto Escolar Piedras <alpi@oticon.com>