#!/usr/bin/env python3
# vim: set syntax=python ts=4 :
#
# SPDX-License-Identifier: Apache-2.0
"""Zephyr Sanity Tests

Also check the "User and Developer Guides" at https://docs.zephyrproject.org/

This script scans for the set of unit test applications in the git
repository and attempts to execute them. By default, it tries to
build each test case on one platform per architecture, using a precedence
list defined in an architecture configuration file, and if possible
run the tests in any available emulators or simulators on the system.

Test cases are detected by the presence of a 'testcase.yaml' or a 'sample.yaml'
file in the application's project directory. This file may contain one or more
blocks, each identifying a test scenario. The title of the block is a name for
the test case, which only needs to be unique for the test cases specified in
that testcase meta-data. The full canonical name for each test case is <path to
test case>/<block>.
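
For example, a hypothetical testcase.yaml under tests/kernel/common/
containing:

    tests:
      kernel.common:
        tags: kernel
        min_ram: 16

defines one scenario whose canonical name would be
tests/kernel/common/kernel.common (the path, block name and values here are
purely illustrative).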

Each test block in the testcase meta data can define the following key/value
pairs:

  tags: <list of tags> (required)
    A set of string tags for the testcase. Usually pertains to
    functional domains but can be anything. Command line invocations
    of this script can filter the set of tests to run based on tag.

  skip: <True|False> (default False)
    skip testcase unconditionally. This can be used for broken tests.

  slow: <True|False> (default False)
    Don't build or run this test case unless --enable-slow was passed
    in on the command line. Intended for time-consuming test cases
    that are only run under certain circumstances, like daily
    builds.

  extra_args: <list of extra arguments>
    Extra cache entries to pass to CMake when building or running the
    test case.

  extra_configs: <list of extra configurations>
    Extra configuration options to be merged with a master prj.conf
    when building or running the test case.

  build_only: <True|False> (default False)
    If true, don't try to run the test even if the selected platform
    supports it.

  build_on_all: <True|False> (default False)
    If true, attempt to build the test on all available platforms.

  depends_on: <list of features>
    A board or platform can announce what features it supports; this option
    enables the test only on those platforms that provide this feature.

  min_ram: <integer>
    minimum amount of RAM needed for this test to build and run. This is
    compared with information provided by the board metadata.

  min_flash: <integer>
    minimum amount of ROM needed for this test to build and run. This is
    compared with information provided by the board metadata.

  timeout: <number of seconds>
    Length of time to run the test in an emulator before automatically
    killing it. Defaults to 60 seconds.

  arch_whitelist: <list of arches, such as x86, arm, arc>
    Set of architectures that this test case should only be run for.

  arch_exclude: <list of arches, such as x86, arm, arc>
    Set of architectures that this test case should not run on.

  platform_whitelist: <list of platforms>
    Set of platforms that this test case should only be run for.

  platform_exclude: <list of platforms>
    Set of platforms that this test case should not run on.

  extra_sections: <list of extra binary sections>
    When computing sizes, sanitycheck will report errors if it finds
    extra, unexpected sections in the Zephyr binary unless they are named
    here. They will not be included in the size calculation.

  filter: <expression>
    Filter whether the testcase should be run by evaluating an expression
    against an environment containing the following values:

    { ARCH : <architecture>,
      PLATFORM : <platform>,
      <all CONFIG_* key/value pairs in the test's generated defconfig>,
      <all DT_* key/value pairs in the test's generated device tree file>,
      <all CMake key/value pairs in the test's generated CMakeCache.txt file>,
      *<env>: any environment variable available
    }

    The grammar for the expression language is as follows:

    expression ::= expression "and" expression
                 | expression "or" expression
                 | "not" expression
                 | "(" expression ")"
                 | symbol "==" constant
                 | symbol "!=" constant
                 | symbol "<" number
                 | symbol ">" number
                 | symbol ">=" number
                 | symbol "<=" number
                 | symbol "in" list
                 | symbol ":" string
                 | symbol

    list ::= "[" list_contents "]"

    list_contents ::= constant
                    | list_contents "," constant

    constant ::= number
               | string

    For the case where expression ::= symbol, it evaluates to true
    if the symbol is defined to a non-empty string.

    Operator precedence, starting from lowest to highest:

        or (left associative)
        and (left associative)
        not (right associative)
        all comparison operators (non-associative)
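
        For instance (CONFIG_FOO/BAR/BAZ are illustrative symbols), with the
        precedence above

            filter = CONFIG_FOO and not CONFIG_BAR or CONFIG_BAZ

        parses as (CONFIG_FOO and (not CONFIG_BAR)) or CONFIG_BAZ.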

    arch_whitelist, arch_exclude, platform_whitelist, platform_exclude
    are all syntactic sugar for these expressions. For instance

        arch_exclude = x86 arc

    Is the same as:

        filter = not ARCH in ["x86", "arc"]

    The ':' operator compiles the string argument as a regular expression,
    and then returns a true value only if the symbol's value in the environment
    matches. For example, if CONFIG_SOC="stm32f107xc" then

        filter = CONFIG_SOC : "stm.*"

    Would match it.

The set of test cases that actually run depends on directives in the testcase
file and options passed in on the command line. If there is any confusion,
running with -v or examining the discard report (sanitycheck_discard.csv)
can help show why particular test cases were skipped.

Metrics (such as pass/fail state and binary size) for the last code
release are stored in scripts/sanity_chk/sanity_last_release.csv.
To update this, pass the --all --release options.

To load arguments from a file, write '+' before the file name, e.g.,
+file_name. File content must be one or more valid arguments separated by
line breaks instead of whitespace.
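
For instance, a file passed as +extra_args might contain (the options shown
are only illustrative):

    --platform=qemu_x86
    -v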

Most everyday users will run with no arguments.

"""

import os
import contextlib
import string
import mmap
import argparse
import sys
import re
import subprocess
import multiprocessing
import select
import shutil
import shlex
import signal
import threading
import concurrent.futures
from threading import BoundedSemaphore
import queue
import time
import datetime
import csv
import yaml
import glob
import serial
import concurrent
import xml.etree.ElementTree as ET
from collections import OrderedDict
from itertools import islice
from pathlib import Path
from distutils.spawn import find_executable
try:
    from anytree import Node, RenderTree, find
except ImportError:
    print("Install the anytree module to use the --test-tree option")

ZEPHYR_BASE = os.getenv("ZEPHYR_BASE")
if not ZEPHYR_BASE:
    sys.exit("$ZEPHYR_BASE environment variable undefined")

sys.path.insert(0, os.path.join(ZEPHYR_BASE, "scripts", "dts"))
import edtlib

import logging

hw_map_local = threading.Lock()
report_lock = threading.Lock()

log_format = "%(levelname)s %(name)s::%(module)s.%(funcName)s():%(lineno)d: %(message)s"
logging.basicConfig(format=log_format, level=30)
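# Note: level 30 corresponds to logging.WARNING.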

# Use this for internal comparisons; that's what canonicalization is
# for. Don't use it when invoking other components of the build system
# to avoid confusing and hard to trace inconsistencies in error messages
# and logs, generated Makefiles, etc. compared to when users invoke these
# components directly.
# Note "normalization" is different from canonicalization, see os.path.
canonical_zephyr_base = os.path.realpath(ZEPHYR_BASE)

sys.path.insert(0, os.path.join(ZEPHYR_BASE, "scripts/"))

from sanity_chk import scl
from sanity_chk import expr_parser

VERBOSE = 0

RELEASE_DATA = os.path.join(ZEPHYR_BASE, "scripts", "sanity_chk",
                            "sanity_last_release.csv")

if os.isatty(sys.stdout.fileno()):
    TERMINAL = True
    COLOR_NORMAL = '\033[0m'
    COLOR_RED = '\033[91m'
    COLOR_GREEN = '\033[92m'
    COLOR_YELLOW = '\033[93m'
else:
    TERMINAL = False
    COLOR_NORMAL = ""
    COLOR_RED = ""
    COLOR_GREEN = ""
    COLOR_YELLOW = ""


class CMakeCacheEntry:
    '''Represents a CMake cache entry.

    This class understands the type system in a CMakeCache.txt, and
    converts the following cache types to Python types:

      Cache Type    Python type
      ----------    -------------------------------------------
      FILEPATH      str
      PATH          str
      STRING        str OR list of str (if ';' is in the value)
      BOOL          bool
      INTERNAL      str OR list of str (if ';' is in the value)
      ----------    -------------------------------------------
    '''

    # Regular expression for a cache entry.
    #
    # CMake variable names can include escape characters, allowing a
    # wider set of names than is easy to match with a regular
    # expression. To be permissive here, use a non-greedy match up to
    # the first colon (':'). This breaks if the variable name has a
    # colon inside, but it's good enough.
    CACHE_ENTRY = re.compile(
        r'''(?P<name>.*?)                               # name
         :(?P<type>FILEPATH|PATH|STRING|BOOL|INTERNAL)  # type
         =(?P<value>.*)                                 # value
        ''', re.X)

    @classmethod
    def _to_bool(cls, val):
        # Convert a CMake BOOL string into a Python bool.
        #
        #   "True if the constant is 1, ON, YES, TRUE, Y, or a
        #   non-zero number. False if the constant is 0, OFF, NO,
        #   FALSE, N, IGNORE, NOTFOUND, the empty string, or ends in
        #   the suffix -NOTFOUND. Named boolean constants are
        #   case-insensitive. If the argument is not one of these
        #   constants, it is treated as a variable."
        #
        # https://cmake.org/cmake/help/v3.0/command/if.html
        val = val.upper()
        if val in ('ON', 'YES', 'TRUE', 'Y'):
            return 1
        elif val in ('OFF', 'NO', 'FALSE', 'N', 'IGNORE', 'NOTFOUND', ''):
            return 0
        elif val.endswith('-NOTFOUND'):
            return 0
        else:
            try:
                v = int(val)
                return v != 0
            except ValueError as exc:
                raise ValueError('invalid bool {}'.format(val)) from exc

    @classmethod
    def from_line(cls, line, line_no):
        # Comments can only occur at the beginning of a line.
        # (The value of an entry could contain a comment character).
        if line.startswith('//') or line.startswith('#'):
            return None

        # Whitespace-only lines do not contain cache entries.
        if not line.strip():
            return None

        m = cls.CACHE_ENTRY.match(line)
        if not m:
            return None

        name, type_, value = (m.group(g) for g in ('name', 'type', 'value'))
        if type_ == 'BOOL':
            try:
                value = cls._to_bool(value)
            except ValueError as exc:
                args = exc.args + ('on line {}: {}'.format(line_no, line),)
                raise ValueError(args) from exc
        elif type_ in ['STRING', 'INTERNAL']:
            # If the value is a CMake list (i.e. is a string which
            # contains a ';'), convert to a Python list.
            if ';' in value:
                value = value.split(';')

        return CMakeCacheEntry(name, value)

    def __init__(self, name, value):
        self.name = name
        self.value = value

    def __str__(self):
        fmt = 'CMakeCacheEntry(name={}, value={})'
        return fmt.format(self.name, self.value)


class CMakeCache:
    '''Parses and represents a CMake cache file.'''

    @staticmethod
    def from_file(cache_file):
        return CMakeCache(cache_file)

    def __init__(self, cache_file):
        self.cache_file = cache_file
        self.load(cache_file)

    def load(self, cache_file):
        entries = []
        with open(cache_file, 'r') as cache:
            for line_no, line in enumerate(cache):
                entry = CMakeCacheEntry.from_line(line, line_no)
                if entry:
                    entries.append(entry)
        self._entries = OrderedDict((e.name, e) for e in entries)

    def get(self, name, default=None):
        entry = self._entries.get(name)
        if entry is not None:
            return entry.value
        else:
            return default

    def get_list(self, name, default=None):
        if default is None:
            default = []
        entry = self._entries.get(name)
        if entry is not None:
            value = entry.value
            if isinstance(value, list):
                return value
            elif isinstance(value, str):
                return [value] if value else []
            else:
                msg = 'invalid value {} type {}'
                raise RuntimeError(msg.format(value, type(value)))
        else:
            return default

    def __contains__(self, name):
        return name in self._entries

    def __getitem__(self, name):
        return self._entries[name].value

    def __setitem__(self, name, entry):
        if not isinstance(entry, CMakeCacheEntry):
            msg = 'improper type {} for value {}, expecting CMakeCacheEntry'
            raise TypeError(msg.format(type(entry), entry))
        self._entries[name] = entry

    def __delitem__(self, name):
        del self._entries[name]

    def __iter__(self):
        return iter(self._entries.values())
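
# Illustrative usage sketch (comment only; the file path and entry name are
# hypothetical):
#     cache = CMakeCache.from_file("build/CMakeCache.txt")
#     if cache.get("CMAKE_CROSSCOMPILING"):
#         ...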


class SanityCheckException(Exception):
    pass


class SanityRuntimeError(SanityCheckException):
    pass


class ConfigurationError(SanityCheckException):
    def __init__(self, cfile, message):
        SanityCheckException.__init__(self, cfile + ": " + message)


class BuildError(SanityCheckException):
    pass


class ExecutionError(SanityCheckException):
    pass


log_file = None


# Debug Functions
def info(what, show_time=True):
    if options.timestamps and show_time:
        date = datetime.datetime.now().strftime("%Y-%m-%d %H:%M:%S")
        what = "{}: {}".format(date, what)
    sys.stdout.write(what + "\n")
    sys.stdout.flush()
    if log_file:
        log_file.write(what + "\n")
        log_file.flush()


def error(what):
    if options.timestamps:
        date = datetime.datetime.now().strftime("%Y-%m-%dT%H:%M:%S")
        what = "{}: {}".format(date, what)
    sys.stderr.write(COLOR_RED + what + COLOR_NORMAL + "\n")
    if log_file:
        log_file.write(what + "\n")
        log_file.flush()


def debug(what):
    if VERBOSE >= 1:
        info(what)


def verbose(what):
    if VERBOSE >= 2:
        info(what)


class HarnessImporter:

    def __init__(self, name):
        sys.path.insert(0, os.path.join(ZEPHYR_BASE, "scripts/sanity_chk"))
        module = __import__("harness")
        if name:
            my_class = getattr(module, name)
        else:
            my_class = getattr(module, "Test")

        self.instance = my_class()


class Handler:
    def __init__(self, instance, type_str="build"):
        """Constructor

        """
        self.lock = threading.Lock()

        self.state = "waiting"
        self.run = False
        self.duration = 0
        self.type_str = type_str

        self.binary = None
        self.pid_fn = None
        self.call_make_run = False

        self.name = instance.name
        self.instance = instance
        self.timeout = instance.testcase.timeout
        self.sourcedir = instance.testcase.source_dir
        self.build_dir = instance.build_dir
        self.log = os.path.join(self.build_dir, "handler.log")
        self.returncode = 0
        self.set_state("running", self.duration)

        self.args = []

    def set_state(self, state, duration):
        self.lock.acquire()
        self.state = state
        self.duration = duration
        self.lock.release()

    def get_state(self):
        self.lock.acquire()
        ret = (self.state, self.duration)
        self.lock.release()
        return ret

    def record(self, harness):
        if harness.recording:
            filename = os.path.join(options.outdir,
                                    self.instance.platform.name,
                                    self.instance.testcase.name, "recording.csv")
            with open(filename, "at") as csvfile:
                cw = csv.writer(csvfile, harness.fieldnames, lineterminator=os.linesep)
                cw.writerow(harness.fieldnames)
                for instance in harness.recording:
                    cw.writerow(instance)


class BinaryHandler(Handler):
    def __init__(self, instance, type_str):
        """Constructor

        @param instance Test Instance
        """
        super().__init__(instance, type_str)

        self.terminated = False

    def try_kill_process_by_pid(self):
        if self.pid_fn:
            pid = int(open(self.pid_fn).read())
            os.unlink(self.pid_fn)
            self.pid_fn = None  # clear so we don't try to kill the binary twice
            try:
                os.kill(pid, signal.SIGTERM)
            except ProcessLookupError:
                pass

    def terminate(self, proc):
        # encapsulate terminate functionality so we do it consistently where ever
        # we might want to terminate the proc. We need try_kill_process_by_pid
        # because of both how newer ninja (1.6.0 or greater) and .NET / renode
        # work. Newer ninja's don't seem to pass SIGTERM down to the children
        # so we need to use try_kill_process_by_pid.
        self.try_kill_process_by_pid()
        proc.terminate()
        self.terminated = True

    def _output_reader(self, proc, harness):
        log_out_fp = open(self.log, "wt")
        for line in iter(proc.stdout.readline, b''):
            verbose("OUTPUT: {0}".format(line.decode('utf-8').rstrip()))
            log_out_fp.write(line.decode('utf-8'))
            log_out_fp.flush()
            harness.handle(line.decode('utf-8').rstrip())
            if harness.state:
                try:
                    # POSIX arch based ztests end on their own,
                    # so let's give it up to 100ms to do so
                    proc.wait(0.1)
                except subprocess.TimeoutExpired:
                    self.terminate(proc)
                break

        log_out_fp.close()

    def handle(self):

        harness_name = self.instance.testcase.harness.capitalize()
        harness_import = HarnessImporter(harness_name)
        harness = harness_import.instance
        harness.configure(self.instance)

        if self.call_make_run:
            command = [get_generator()[0], "run"]
        else:
            command = [self.binary]

        run_valgrind = False
        if options.enable_valgrind and shutil.which("valgrind"):
            command = ["valgrind", "--error-exitcode=2",
                       "--leak-check=full",
                       "--suppressions=" + ZEPHYR_BASE + "/scripts/valgrind.supp",
                       "--log-file=" + self.build_dir + "/valgrind.log"
                       ] + command
            run_valgrind = True

        verbose("Spawning process: " +
                " ".join(shlex.quote(word) for word in command) + os.linesep +
                "Spawning process in directory: " + self.build_dir)

        start_time = time.time()

        env = os.environ.copy()
        if options.enable_asan:
            env["ASAN_OPTIONS"] = "log_path=stdout:" + \
                                  env.get("ASAN_OPTIONS", "")
            if not options.enable_lsan:
                env["ASAN_OPTIONS"] += "detect_leaks=0"
        with subprocess.Popen(command, stdout=subprocess.PIPE,
                              stderr=subprocess.PIPE, cwd=self.build_dir, env=env) as proc:
            verbose("Spawning BinaryHandler Thread for %s" % self.name)
            t = threading.Thread(target=self._output_reader, args=(proc, harness), daemon=True)
            t.start()
            t.join(self.timeout)
            if t.is_alive():
                self.terminate(proc)
                t.join()
            proc.wait()
            self.returncode = proc.returncode

        handler_time = time.time() - start_time

        if options.enable_coverage:
            subprocess.call(["GCOV_PREFIX=" + self.build_dir,
                             "gcov", self.sourcedir, "-b", "-s", self.build_dir], shell=True)

        self.try_kill_process_by_pid()

        # FIXME: This is needed when killing the simulator, the console is
        # garbled and needs to be reset. Did not find a better way to do that.

        subprocess.call(["stty", "sane"])
        self.instance.results = harness.tests

        if not self.terminated and self.returncode != 0:
            # When a process is killed, the default handler returns 128 + SIGTERM
            # so in that case the return code itself is not meaningful
            self.set_state("failed", handler_time)
            self.instance.reason = "Handler Error"
        elif run_valgrind and self.returncode == 2:
            self.set_state("failed", handler_time)
            self.instance.reason = "Valgrind error"
        elif harness.state:
            self.set_state(harness.state, handler_time)
        else:
            self.set_state("timeout", handler_time)
            self.instance.reason = "Handler timeout"

        self.record(harness)


class DeviceHandler(Handler):

    def __init__(self, instance, type_str):
        """Constructor

        @param instance Test Instance
        """
        super().__init__(instance, type_str)

        self.suite = None

    def monitor_serial(self, ser, halt_fileno, harness):
        log_out_fp = open(self.log, "wt")

        ser_fileno = ser.fileno()
        readlist = [halt_fileno, ser_fileno]

        while ser.isOpen():
            readable, _, _ = select.select(readlist, [], [], self.timeout)

            if halt_fileno in readable:
                verbose('halted')
                ser.close()
                break
            if ser_fileno not in readable:
                continue  # Timeout.

            serial_line = None
            try:
                serial_line = ser.readline()
            except TypeError:
                pass
            except serial.SerialException:
                ser.close()
                break

            # Just because ser_fileno has data doesn't mean an entire line
            # is available yet.
            if serial_line:
                sl = serial_line.decode('utf-8', 'ignore')
                verbose("DEVICE: {0}".format(sl.rstrip()))

                log_out_fp.write(sl)
                log_out_fp.flush()
                harness.handle(sl.rstrip())

            if harness.state:
                ser.close()
                break

        log_out_fp.close()

    def device_is_available(self, device):
        for i in self.suite.connected_hardware:
            if i['platform'] == device and i['available'] and i['connected']:
                return True

        return False

    def get_available_device(self, device):
        for i in self.suite.connected_hardware:
            if i['platform'] == device and i['available']:
                i['available'] = False
                i['counter'] += 1
                return i

        return None

    def make_device_available(self, serial):
        with hw_map_local:
            for i in self.suite.connected_hardware:
                if i['serial'] == serial:
                    i['available'] = True

    def handle(self):
        out_state = "failed"

        if options.west_flash:
            command = ["west", "flash", "--skip-rebuild", "-d", self.build_dir]
            if options.west_runner:
                command.append("--runner")
                command.append(options.west_runner)
            # There are three ways this option is used.
            # 1) bare: --west-flash
            #    This results in options.west_flash == []
            # 2) with a value: --west-flash="--board-id=42"
            #    This results in options.west_flash == "--board-id=42"
            # 3) Multiple values: --west-flash="--board-id=42,--erase"
            #    This results in options.west_flash == "--board-id=42 --erase"
            if options.west_flash != []:
                command.append('--')
                command.extend(options.west_flash.split(','))
        else:
            command = [get_generator()[0], "-C", self.build_dir, "flash"]

        while not self.device_is_available(self.instance.platform.name):
            time.sleep(1)

        hardware = self.get_available_device(self.instance.platform.name)

        runner = hardware.get('runner', None)
        if runner:
            board_id = hardware.get("probe_id", hardware.get("id", None))
            product = hardware.get("product", None)
            command = ["west", "flash", "--skip-rebuild", "-d", self.build_dir]
            command.append("--runner")
            command.append(hardware.get('runner', None))
            if runner == "pyocd":
                command.append("--board-id")
                command.append(board_id)
            elif runner == "nrfjprog":
                command.append('--')
                command.append("--snr")
                command.append(board_id)
            elif runner == "openocd" and product == "STM32 STLink":
                command.append('--')
                command.append("--cmd-pre-init")
                command.append("hla_serial %s" % (board_id))
            elif runner == "openocd" and product == "EDBG CMSIS-DAP":
                command.append('--')
                command.append("--cmd-pre-init")
                command.append("cmsis_dap_serial %s" % (board_id))
            elif runner == "jlink":
                command.append("--tool-opt=-SelectEmuBySN %s" % (board_id))

        serial_device = hardware['serial']

        try:
            ser = serial.Serial(
                serial_device,
                baudrate=115200,
                parity=serial.PARITY_NONE,
                stopbits=serial.STOPBITS_ONE,
                bytesize=serial.EIGHTBITS,
                timeout=self.timeout
            )
        except serial.SerialException as e:
            self.set_state("failed", 0)
            error("Serial device err: %s" % (str(e)))
            self.make_device_available(serial_device)
            return

        ser.flush()

        harness_name = self.instance.testcase.harness.capitalize()
        harness_import = HarnessImporter(harness_name)
        harness = harness_import.instance
        harness.configure(self.instance)
        read_pipe, write_pipe = os.pipe()
        start_time = time.time()

        t = threading.Thread(target=self.monitor_serial, daemon=True,
                             args=(ser, read_pipe, harness))
        t.start()

        logging.debug('Flash command: %s', command)
        try:
            if VERBOSE and not runner:
                subprocess.check_call(command)
            else:
                subprocess.check_output(command, stderr=subprocess.PIPE)

        except subprocess.CalledProcessError:
            os.write(write_pipe, b'x')  # halt the thread

        t.join(self.timeout)
        if t.is_alive():
            out_state = "timeout"

        if ser.isOpen():
            ser.close()

        if out_state == "timeout":
            for c in self.instance.testcase.cases:
                if c not in harness.tests:
                    harness.tests[c] = "BLOCK"

        handler_time = time.time() - start_time

        self.instance.results = harness.tests
        if harness.state:
            self.set_state(harness.state, handler_time)
        else:
            self.set_state(out_state, handler_time)

        self.make_device_available(serial_device)

        self.record(harness)


class QEMUHandler(Handler):
    """Spawns a thread to monitor QEMU output from pipes

    We pass QEMU_PIPE to 'make run' and monitor the pipes for output.
    We need to do this as once qemu starts, it runs forever until killed.
    Test cases emit special messages to the console as they run, we check
    for these to collect whether the test passed or failed.
    """

    def __init__(self, instance, type_str):
        """Constructor

        @param instance Test instance
        """

        super().__init__(instance, type_str)
        self.fifo_fn = os.path.join(instance.build_dir, "qemu-fifo")

        self.pid_fn = os.path.join(instance.build_dir, "qemu.pid")

    @staticmethod
    def _thread(handler, timeout, outdir, logfile, fifo_fn, pid_fn, results, harness):
        fifo_in = fifo_fn + ".in"
        fifo_out = fifo_fn + ".out"

        # These in/out nodes are named from QEMU's perspective, not ours
        if os.path.exists(fifo_in):
            os.unlink(fifo_in)
        os.mkfifo(fifo_in)
        if os.path.exists(fifo_out):
            os.unlink(fifo_out)
        os.mkfifo(fifo_out)

        # We don't do anything with out_fp but we need to open it for
        # writing so that QEMU doesn't block, due to the way pipes work
        out_fp = open(fifo_in, "wb")
        # Disable internal buffering, we don't
        # want read() or poll() to ever block if there is data in there
        in_fp = open(fifo_out, "rb", buffering=0)
        log_out_fp = open(logfile, "wt")

        start_time = time.time()
        timeout_time = start_time + timeout
        p = select.poll()
        p.register(in_fp, select.POLLIN)
        out_state = None

        line = ""
        timeout_extended = False
        while True:
            this_timeout = int((timeout_time - time.time()) * 1000)
            if this_timeout < 0 or not p.poll(this_timeout):
                if not out_state:
                    out_state = "timeout"
                break

            try:
                c = in_fp.read(1).decode("utf-8")
            except UnicodeDecodeError:
                # Test is writing something weird, fail
                out_state = "unexpected byte"
                break

            if c == "":
                # EOF, this shouldn't happen unless QEMU crashes
                out_state = "unexpected eof"
                break
            line = line + c
            if c != "\n":
                continue

            # line contains a full line of data output from QEMU
            log_out_fp.write(line)
            log_out_fp.flush()
            line = line.strip()
            verbose("QEMU: %s" % line)

            harness.handle(line)
            if harness.state:
                # if we have registered a fail make sure the state is not
                # overridden by a false success message coming from the
                # testsuite
                if out_state != 'failed':
                    out_state = harness.state

                # if we get some state, that means test is doing well, we reset
                # the timeout and wait for 2 more seconds to catch anything
                # printed late. We wait much longer if code
                # coverage is enabled since dumping this information can
                # take some time.
                if not timeout_extended or harness.capture_coverage:
                    timeout_extended = True
                    if harness.capture_coverage:
                        timeout_time = time.time() + 30
                    else:
                        timeout_time = time.time() + 2
            line = ""

        handler.record(harness)

        handler_time = time.time() - start_time
        verbose("QEMU complete (%s) after %f seconds" %
                (out_state, handler_time))
        handler.set_state(out_state, handler_time)

        log_out_fp.close()
        out_fp.close()
        in_fp.close()
        if os.path.exists(pid_fn):
            pid = int(open(pid_fn).read())
            os.unlink(pid_fn)

        try:
            if pid:
                os.kill(pid, signal.SIGTERM)
        except ProcessLookupError:
            # Oh well, as long as it's dead! User probably sent Ctrl-C
            pass

        os.unlink(fifo_in)
        os.unlink(fifo_out)

    def handle(self):
        self.results = {}
        self.run = True

        # We pass this to QEMU which looks for fifos with .in and .out
        # suffixes.
        self.fifo_fn = os.path.join(self.instance.build_dir, "qemu-fifo")

        self.pid_fn = os.path.join(self.instance.build_dir, "qemu.pid")
        if os.path.exists(self.pid_fn):
            os.unlink(self.pid_fn)

        self.log_fn = self.log

        harness_import = HarnessImporter(self.instance.testcase.harness.capitalize())
        harness = harness_import.instance
        harness.configure(self.instance)
        self.thread = threading.Thread(name=self.name, target=QEMUHandler._thread,
                                       args=(self, self.timeout, self.build_dir,
                                             self.log_fn, self.fifo_fn,
                                             self.pid_fn, self.results, harness))

        self.instance.results = harness.tests
        self.thread.daemon = True
        verbose("Spawning QEMUHandler Thread for %s" % self.name)
        self.thread.start()
        subprocess.call(["stty", "sane"])

        verbose("Running %s (%s)" % (self.name, self.type_str))
        command = [get_generator()[0]]
        command += ["-C", self.build_dir, "run"]

        with subprocess.Popen(command, stdout=subprocess.PIPE, stderr=subprocess.PIPE, cwd=self.build_dir) as proc:
            verbose("Spawning QEMUHandler Thread for %s" % self.name)
            proc.wait()
            self.returncode = proc.returncode

        if self.returncode != 0:
            self.set_state("failed", 0)
            self.instance.reason = "Exited with {}".format(self.returncode)

    def get_fifo(self):
        return self.fifo_fn


class SizeCalculator:

    alloc_sections = [
        "bss",
        "noinit",
        "app_bss",
        "app_noinit",
        "ccm_bss",
        "ccm_noinit"
    ]

    rw_sections = [
        "datas",
        "initlevel",
        "exceptions",
        "initshell",
        "_static_thread_area",
        "_k_timer_area",
        "_k_mem_slab_area",
        "_k_mem_pool_area",
        "sw_isr_table",
        "_k_sem_area",
        "_k_mutex_area",
        "app_shmem_regions",
        "_k_fifo_area",
        "_k_lifo_area",
        "_k_stack_area",
        "_k_msgq_area",
        "_k_mbox_area",
        "_k_pipe_area",
        "net_if",
        "net_if_dev",
        "net_stack",
        "net_l2_data",
        "_k_queue_area",
        "_net_buf_pool_area",
        "app_datas",
        "kobject_data",
        "mmu_tables",
        "app_pad",
        "priv_stacks",
        "ccm_data",
        "usb_descriptor",
        "usb_data", "usb_bos_desc",
        'log_backends_sections',
        'log_dynamic_sections',
        'log_const_sections',
        "app_smem",
        'shell_root_cmds_sections',
        'log_const_sections',
        "font_entry_sections",
        "priv_stacks_noinit",
        "_TEXT_SECTION_NAME_2",
        "_GCOV_BSS_SECTION_NAME",
        "gcov",
        "nocache"
    ]

    # These get copied into RAM only on non-XIP
    ro_sections = [
        "text",
        "ctors",
        "init_array",
        "reset",
        "object_access",
        "rodata",
        "devconfig",
        "net_l2",
        "vector",
        "sw_isr_table",
        "_settings_handlers_area",
        "_bt_channels_area",
        "_bt_br_channels_area",
        "_bt_services_area",
        "vectors",
        "net_socket_register",
        "net_ppp_proto"
    ]

    def __init__(self, filename, extra_sections):
        """Constructor

        @param filename Path to the output binary
            The <filename> is parsed by objdump to determine section sizes
        """
        # Make sure this is an ELF binary
        with open(filename, "rb") as f:
            magic = f.read(4)

        try:
            if magic != b'\x7fELF':
                raise SanityRuntimeError("%s is not an ELF binary" % filename)
        except Exception as e:
            print(str(e))
            sys.exit(2)

        # Search for CONFIG_XIP in the ELF's list of symbols using NM and AWK.
        # GREP can not be used as it returns an error if the symbol is not
        # found.
        is_xip_command = "nm " + filename + \
                         " | awk '/CONFIG_XIP/ { print $3 }'"
        is_xip_output = subprocess.check_output(
            is_xip_command, shell=True, stderr=subprocess.STDOUT).decode(
            "utf-8").strip()
        try:
            if is_xip_output.endswith("no symbols"):
                raise SanityRuntimeError("%s has no symbol information" % filename)
        except Exception as e:
            print(str(e))
            sys.exit(2)

        self.is_xip = (len(is_xip_output) != 0)

        self.filename = filename
        self.sections = []
        self.rom_size = 0
        self.ram_size = 0
        self.extra_sections = extra_sections

        self._calculate_sizes()

    def get_ram_size(self):
        """Get the amount of RAM the application will use up on the device

        @return amount of RAM, in bytes
        """
        return self.ram_size

    def get_rom_size(self):
        """Get the size of the data that this application uses on device's flash

        @return amount of ROM, in bytes
        """
        return self.rom_size

    def unrecognized_sections(self):
        """Get a list of sections inside the binary that weren't recognized

        @return list of unrecognized section names
        """
        slist = []
        for v in self.sections:
            if not v["recognized"]:
                slist.append(v["name"])
        return slist

    def _calculate_sizes(self):
        """ Calculate RAM and ROM usage by section """
        objdump_command = "objdump -h " + self.filename
        objdump_output = subprocess.check_output(
            objdump_command, shell=True).decode("utf-8").splitlines()
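
        # Each line of "objdump -h" output parsed below looks roughly like
        # (values are illustrative):
        #   "  1 text   00003a44  00000000  00000000  00000074  2**2"
        # i.e. words[0] is the section index, words[1] the name, words[2] the
        # size in hex, words[3] the VMA and words[4] the LMA.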
        for line in objdump_output:
            words = line.split()

            if not words:  # Skip lines that are too short
                continue

            index = words[0]
            if not index[0].isdigit():  # Skip lines that do not start
                continue                # with a digit

            name = words[1]             # Skip lines with section names
            if name[0] == '.':          # starting with '.'
                continue

            # TODO this doesn't actually reflect the size in flash or RAM as
            # it doesn't include linker-imposed padding between sections.
            # It is close though.
            size = int(words[2], 16)
            if size == 0:
                continue

            load_addr = int(words[4], 16)
            virt_addr = int(words[3], 16)

            # Add section to memory use totals (for both non-XIP and XIP scenarios)
            # Unrecognized section names are not included in the calculations.
            recognized = True
            if name in SizeCalculator.alloc_sections:
                self.ram_size += size
                stype = "alloc"
            elif name in SizeCalculator.rw_sections:
                self.ram_size += size
                self.rom_size += size
                stype = "rw"
            elif name in SizeCalculator.ro_sections:
                self.rom_size += size
                if not self.is_xip:
                    self.ram_size += size
                stype = "ro"
            else:
                stype = "unknown"
                if name not in self.extra_sections:
                    recognized = False

            self.sections.append({"name": name, "load_addr": load_addr,
                                  "size": size, "virt_addr": virt_addr,
                                  "type": stype, "recognized": recognized})
# "list" - List of strings
|
|
|
|
# "list:<type>" - List of <type>
|
|
|
|
# "set" - Set of unordered, unique strings
|
|
|
|
# "set:<type>" - Set of <type>
|
|
|
|
# "float" - Floating point
|
|
|
|
# "int" - Integer
|
|
|
|
# "bool" - Boolean
|
|
|
|
# "str" - String
|
|
|
|
|
|
|
|
# XXX Be sure to update __doc__ if you change any of this!!
|
|
|
|
|
2019-06-22 17:04:10 +02:00
|
|
|
platform_valid_keys = {
|
2018-10-18 18:25:55 +02:00
|
|
|
"supported_toolchains": {"type": "list", "default": []},
|
|
|
|
"env": {"type": "list", "default": []}
|
|
|
|
}
|
2017-12-05 21:28:44 +01:00
|
|
|
|
|
|
|
testcase_valid_keys = {"tags": {"type": "set", "required": False},
|
|
|
|
"type": {"type": "str", "default": "integration"},
|
|
|
|
"extra_args": {"type": "list"},
|
|
|
|
"extra_configs": {"type": "list"},
|
|
|
|
"build_only": {"type": "bool", "default": False},
|
|
|
|
"build_on_all": {"type": "bool", "default": False},
|
|
|
|
"skip": {"type": "bool", "default": False},
|
|
|
|
"slow": {"type": "bool", "default": False},
|
|
|
|
"timeout": {"type": "int", "default": 60},
|
|
|
|
"min_ram": {"type": "int", "default": 8},
|
|
|
|
"depends_on": {"type": "set"},
|
|
|
|
"min_flash": {"type": "int", "default": 32},
|
|
|
|
"arch_whitelist": {"type": "set"},
|
|
|
|
"arch_exclude": {"type": "set"},
|
|
|
|
"extra_sections": {"type": "list", "default": []},
|
|
|
|
"platform_exclude": {"type": "set"},
|
|
|
|
"platform_whitelist": {"type": "set"},
|
|
|
|
"toolchain_exclude": {"type": "set"},
|
|
|
|
"toolchain_whitelist": {"type": "set"},
|
2017-12-08 16:17:57 +01:00
|
|
|
"filter": {"type": "str"},
|
2017-12-24 02:20:27 +01:00
|
|
|
"harness": {"type": "str"},
|
2018-09-12 23:28:28 +02:00
|
|
|
"harness_config": {"type": "map", "default": {}}
|
2017-12-08 16:17:57 +01:00
|
|
|
}
|
2015-07-17 21:03:52 +02:00
|
|
|
|
|
|
|


class SanityConfigParser:
    """Class to read test case files with semantic checking
    """

    def __init__(self, filename, schema):
        """Instantiate a new SanityConfigParser object

        @param filename Source .yaml file to read
        """
        self.data = {}
        self.schema = schema
        self.filename = filename
        self.tests = {}
        self.common = {}

    def load(self):
        self.data = scl.yaml_load_verify(self.filename, self.schema)

        if 'tests' in self.data:
            self.tests = self.data['tests']
        if 'common' in self.data:
            self.common = self.data['common']

    def _cast_value(self, value, typestr):
        if isinstance(value, str):
            v = value.strip()
        if typestr == "str":
            return v

        elif typestr == "float":
            return float(value)

        elif typestr == "int":
            return int(value)

        elif typestr == "bool":
            return value

        elif typestr.startswith("list") and isinstance(value, list):
            return value
        elif typestr.startswith("list") and isinstance(value, str):
            vs = v.split()
            if len(typestr) > 4 and typestr[4] == ":":
                return [self._cast_value(vsi, typestr[5:]) for vsi in vs]
            else:
                return vs

        elif typestr.startswith("set"):
            vs = v.split()
            if len(typestr) > 3 and typestr[3] == ":":
                return {self._cast_value(vsi, typestr[4:]) for vsi in vs}
            else:
                return set(vs)

        elif typestr.startswith("map"):
            return value
        else:
            raise ConfigurationError(
                self.filename, "unknown type '%s'" % value)

    def get_test(self, name, valid_keys):
        """Get a dictionary representing the keys/values within a test

        @param name The test in the .yaml file to retrieve data from
        @param valid_keys A dictionary representing the intended semantics
            for this test. Each key in this dictionary is a key that could
            be specified, if a key is given in the .yaml file which isn't in
            here, it will generate an error. Each value in this dictionary
            is another dictionary containing metadata:

                "default" - Default value if not given
                "type" - Data type to convert the text value to. Simple types
                    supported are "str", "float", "int", "bool" which will get
                    converted to respective Python data types. "set" and "list"
                    may also be specified which will split the value by
                    whitespace (but keep the elements as strings). Finally,
                    "list:<type>" and "set:<type>" may be given which will
                    perform a type conversion after splitting the value up.
                "required" - If true, raise an error if not defined. If false
                    and "default" isn't specified, a type conversion will be
                    done on an empty string
|
2017-12-05 23:27:58 +01:00
|
|
|
@return A dictionary containing the test key-value pairs with
|
2015-07-17 21:03:52 +02:00
|
|
|
type conversion and default values filled in per valid_keys
|
|
|
|
"""
|
|
|
|
|
|
|
|
d = {}
|
2017-12-05 21:08:26 +01:00
|
|
|
for k, v in self.common.items():
|
2017-10-04 22:14:27 +02:00
|
|
|
d[k] = v
|
2018-02-24 15:32:14 +01:00
|
|
|
|
2017-12-05 23:27:58 +01:00
|
|
|
for k, v in self.tests[name].items():
|
2015-07-17 21:03:52 +02:00
|
|
|
if k not in valid_keys:
|
2017-12-05 21:28:44 +01:00
|
|
|
raise ConfigurationError(
|
|
|
|
self.filename,
|
|
|
|
"Unknown config key '%s' in definition for '%s'" %
|
2017-12-05 23:27:58 +01:00
|
|
|
(k, name))
|
2015-07-17 21:03:52 +02:00
|
|
|
|
2017-10-04 22:14:27 +02:00
|
|
|
if k in d:
|
2017-12-05 21:28:44 +01:00
|
|
|
if isinstance(d[k], str):
|
2019-08-29 12:20:57 +02:00
|
|
|
# By default, we just concatenate string values of keys
|
|
|
|
# which appear both in "common" and per-test sections,
|
|
|
|
# but some keys are handled in adhoc way based on their
|
|
|
|
# semantics.
|
|
|
|
if k == "filter":
|
|
|
|
d[k] = "(%s) and (%s)" % (d[k], v)
|
|
|
|
else:
|
|
|
|
d[k] += " " + v
|
2017-10-04 22:14:27 +02:00
|
|
|
else:
|
|
|
|
d[k] = v
|
2018-02-24 15:32:14 +01:00
|
|
|
|
2016-02-22 22:28:10 +01:00
|
|
|
for k, kinfo in valid_keys.items():
|
2015-07-17 21:03:52 +02:00
|
|
|
if k not in d:
|
|
|
|
if "required" in kinfo:
|
|
|
|
required = kinfo["required"]
|
|
|
|
else:
|
|
|
|
required = False
|
|
|
|
|
|
|
|
if required:
|
2017-12-05 21:28:44 +01:00
|
|
|
raise ConfigurationError(
|
|
|
|
self.filename,
|
2017-12-05 23:27:58 +01:00
|
|
|
"missing required value for '%s' in test '%s'" %
|
|
|
|
(k, name))
|
2015-07-17 21:03:52 +02:00
|
|
|
else:
|
|
|
|
if "default" in kinfo:
|
|
|
|
default = kinfo["default"]
|
|
|
|
else:
|
|
|
|
default = self._cast_value("", kinfo["type"])
|
|
|
|
d[k] = default
|
|
|
|
else:
|
|
|
|
try:
|
|
|
|
d[k] = self._cast_value(d[k], kinfo["type"])
|
2019-06-22 17:04:10 +02:00
|
|
|
except ValueError:
|
2017-12-05 21:28:44 +01:00
|
|
|
raise ConfigurationError(
|
2017-12-05 23:27:58 +01:00
|
|
|
self.filename, "bad %s value '%s' for key '%s' in name '%s'" %
|
|
|
|
(kinfo["type"], d[k], k, name))
|
2015-07-17 21:03:52 +02:00
|
|
|
|
|
|
|
return d
|
|
|
|
|
|
|
|
|
|
|
|
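
# Illustrative sketch (not part of the original script) of how get_test() is
# typically driven. The key names below are hypothetical; the authoritative
# set lives in the testcase schema and in the testcase_valid_keys table used
# by TestSuite.add_testcase().
#
#   parser = SanityConfigParser("testcase.yaml", tc_schema)
#   parser.load()
#   valid_keys = {
#       "tags": {"type": "set", "required": True},
#       "timeout": {"type": "int", "default": 60},
#       "extra_args": {"type": "list"},
#   }
#   for name in parser.tests:
#       tc_dict = parser.get_test(name, valid_keys)
#       # tc_dict["tags"] is a set, tc_dict["timeout"] an int, and so on.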

class Platform:
    """Class representing metadata for a particular platform

    Maps directly to BOARD when building"""

    platform_schema = scl.yaml_load(os.path.join(ZEPHYR_BASE,
                                    "scripts", "sanity_chk", "platform-schema.yaml"))

    def __init__(self):
        """Constructor.
        """
        self.name = ""
        self.sanitycheck = True
        # if no RAM size is specified by the board, take a default of 128K
        self.ram = 128

        self.ignore_tags = []
        self.default = False
        # if no flash size is specified by the board, take a default of 512K
        self.flash = 512
        self.supported = set()

        self.arch = ""
        self.type = "na"
        self.simulation = "na"
        self.supported_toolchains = []
        self.env = []
        self.env_satisfied = True
        self.filter_data = dict()

    def load(self, platform_file):
        scp = SanityConfigParser(platform_file, self.platform_schema)
        scp.load()
        data = scp.data

        self.name = data['identifier']
        self.sanitycheck = data.get("sanitycheck", True)
        # if no RAM size is specified by the board, take a default of 128K
        self.ram = data.get("ram", 128)
        testing = data.get("testing", {})
        self.ignore_tags = testing.get("ignore_tags", [])
        self.default = testing.get("default", False)
        # if no flash size is specified by the board, take a default of 512K
        self.flash = data.get("flash", 512)
        self.supported = set()
        for supp_feature in data.get("supported", []):
            for item in supp_feature.split(":"):
                self.supported.add(item)

        self.arch = data['arch']
        self.type = data.get('type', "na")
        self.simulation = data.get('simulation', "na")
        self.supported_toolchains = data.get("toolchain", [])
        self.env = data.get("env", [])
        self.env_satisfied = True
        for env in self.env:
            if not os.environ.get(env, None):
                self.env_satisfied = False

    def __repr__(self):
        return "<%s on %s>" % (self.name, self.arch)
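
# A minimal board YAML sketch of the kind Platform.load() consumes. The
# values here are made up for illustration; the authoritative keys come from
# scripts/sanity_chk/platform-schema.yaml.
#
#   identifier: some_board
#   name: Some Board
#   arch: arm
#   type: mcu
#   simulation: na
#   toolchain:
#     - zephyr
#   ram: 64
#   flash: 256
#   testing:
#     default: true
#     ignore_tags:
#       - net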

class TestCase(object):
    """Class representing a test application
    """

    def __init__(self):
        """TestCase constructor.

        This gets called by TestSuite as it finds and reads test yaml files.
        Multiple TestCase instances may be generated from a single testcase.yaml,
        each one corresponding to an entry within that file.

        We need to have a unique name for every single test case. Since
        a testcase.yaml can define multiple tests, the canonical name for
        the test case is <workdir>/<name>.

        @param testcase_root os.path.abspath() of one of the --testcase-root
        @param workdir Sub-directory of testcase_root where the
            .yaml test configuration file was found
        @param name Name of this test case, corresponding to the entry name
            in the test case configuration file. For many test cases that just
            define one test, can be anything and is usually "test". This is
            really only used to distinguish between different cases when
            the testcase.yaml defines multiple tests
        @param tc_dict Dictionary with test values for this test case
            from the testcase.yaml file
        """

        self.id = ""
        self.source_dir = ""
        self.yamlfile = ""
        self.cases = []
        self.name = ""

        self.type = None
        self.tags = None
        self.extra_args = None
        self.extra_configs = None
        self.arch_whitelist = None
        self.arch_exclude = None
        self.skip = None
        self.platform_exclude = None
        self.platform_whitelist = None
        self.toolchain_exclude = None
        self.toolchain_whitelist = None
        self.tc_filter = None
        self.timeout = 60
        self.harness = ""
        self.harness_config = {}
        self.build_only = True
        self.build_on_all = False
        self.slow = False
        self.min_ram = None
        self.depends_on = None
        self.min_flash = None
        self.extra_sections = None

    @staticmethod
    def get_unique(testcase_root, workdir, name):

        canonical_testcase_root = os.path.realpath(testcase_root)
        if Path(canonical_zephyr_base) in Path(canonical_testcase_root).parents:
            # This is in ZEPHYR_BASE, so include path in name for uniqueness
            # FIXME: We should not depend on path of test for unique names.
            relative_tc_root = os.path.relpath(canonical_testcase_root,
                                               start=canonical_zephyr_base)
        else:
            relative_tc_root = ""

        # workdir can be "."
        unique = os.path.normpath(os.path.join(relative_tc_root, workdir, name))
        return unique
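
    # Illustrative example (paths are hypothetical): for a testcase root of
    # $ZEPHYR_BASE/tests, a workdir of "kernel/semaphore" and an entry named
    # "kernel.semaphore", get_unique() would return something like
    # "tests/kernel/semaphore/kernel.semaphore"; a root outside ZEPHYR_BASE
    # would drop the leading "tests/" component.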

    @staticmethod
    def scan_file(inf_name):
        suite_regex = re.compile(
            # do not match until end-of-line, otherwise we won't allow
            # stc_regex below to catch the ones that are declared in the same
            # line--as we only search starting the end of this match
            br"^\s*ztest_test_suite\(\s*(?P<suite_name>[a-zA-Z0-9_]+)\s*,",
            re.MULTILINE)
        stc_regex = re.compile(
            br"^\s*"  # empty space at the beginning is ok
            # catch the case where it is declared in the same sentence, e.g:
            #
            # ztest_test_suite(mutex_complex, ztest_user_unit_test(TESTNAME));
            br"(?:ztest_test_suite\([a-zA-Z0-9_]+,\s*)?"
            # Catch ztest[_user]_unit_test-[_setup_teardown](TESTNAME)
            br"ztest_(?:1cpu_)?(?:user_)?unit_test(?:_setup_teardown)?"
            # Consume the argument that becomes the extra testcase
            br"\(\s*"
            br"(?P<stc_name>[a-zA-Z0-9_]+)"
            # _setup_teardown() variant has two extra arguments that we ignore
            br"(?:\s*,\s*[a-zA-Z0-9_]+\s*,\s*[a-zA-Z0-9_]+)?"
            br"\s*\)",
            # We don't check how it finishes; we don't care
            re.MULTILINE)
        suite_run_regex = re.compile(
            br"^\s*ztest_run_test_suite\((?P<suite_name>[a-zA-Z0-9_]+)\)",
            re.MULTILINE)
        achtung_regex = re.compile(
            br"(#ifdef|#endif)",
            re.MULTILINE)
        warnings = None

        with open(inf_name) as inf:
            if os.name == 'nt':
                mmap_args = {'fileno': inf.fileno(), 'length': 0, 'access': mmap.ACCESS_READ}
            else:
                mmap_args = {'fileno': inf.fileno(), 'length': 0, 'flags': mmap.MAP_PRIVATE, 'prot': mmap.PROT_READ, 'offset': 0}

            with contextlib.closing(mmap.mmap(**mmap_args)) as main_c:
                # contextlib makes pylint think main_c isn't subscriptable
                # pylint: disable=unsubscriptable-object

                suite_regex_match = suite_regex.search(main_c)
                if not suite_regex_match:
                    # can't find ztest_test_suite, maybe a client, because
                    # it includes ztest.h
                    return None, None

                suite_run_match = suite_run_regex.search(main_c)
                if not suite_run_match:
                    raise ValueError("can't find ztest_run_test_suite")

                achtung_matches = re.findall(
                    achtung_regex,
                    main_c[suite_regex_match.end():suite_run_match.start()])
                if achtung_matches:
                    warnings = "found invalid %s in ztest_test_suite()" \
                               % ", ".join({match.decode() for match in achtung_matches})
                _matches = re.findall(
                    stc_regex,
                    main_c[suite_regex_match.end():suite_run_match.start()])
                matches = [match.decode().replace("test_", "") for match in _matches]
                return matches, warnings
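
    # Illustrative C input (a hypothetical test source) and what scan_file()
    # would extract from it:
    #
    #   ztest_test_suite(mutex_complex,
    #           ztest_unit_test(test_mutex_lock),
    #           ztest_user_unit_test(test_mutex_unlock));
    #   ztest_run_test_suite(mutex_complex);
    #
    # would yield (["mutex_lock", "mutex_unlock"], None); an #ifdef between
    # the two calls would instead be reported through the warnings string.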

    def scan_path(self, path):
        subcases = []
        for filename in glob.glob(os.path.join(path, "src", "*.c")):
            try:
                _subcases, warnings = self.scan_file(filename)
                if warnings:
                    error("%s: %s" % (filename, warnings))
                if _subcases:
                    subcases += _subcases
            except ValueError as e:
                error("%s: can't find: %s" % (filename, e))
        for filename in glob.glob(os.path.join(path, "*.c")):
            try:
                _subcases, warnings = self.scan_file(filename)
                if warnings:
                    error("%s: %s" % (filename, warnings))
                if _subcases:
                    subcases += _subcases
            except ValueError as e:
                error("%s: can't find: %s" % (filename, e))
        return subcases

    def parse_subcases(self, test_path):
        results = self.scan_path(test_path)
        for sub in results:
            name = "{}.{}".format(self.id, sub)
            self.cases.append(name)

        if not results:
            self.cases.append(self.id)

    def __str__(self):
        return self.name

class TestInstance:
    """Class representing the execution of a particular TestCase on a platform

    @param test The TestCase object we want to build/execute
    @param platform Platform object that we want to build and run against
    @param base_outdir Base directory for all test results. The actual
        out directory used is <outdir>/<platform>/<test case name>
    """

    def __init__(self, testcase, platform, base_outdir):

        self.testcase = testcase
        self.platform = platform

        self.status = None
        self.reason = "N/A"
        self.metrics = dict()
        self.handler = None

        self.name = os.path.join(platform.name, testcase.name)
        self.build_dir = os.path.join(base_outdir, platform.name, testcase.name)

        self.build_only = self.check_build_or_run()
        self.run = not self.build_only

        self.results = {}

    def __lt__(self, other):
        return self.name < other.name

    def check_build_or_run(self):
        # right now we only support building on Windows. Running is still
        # work in progress.
        if os.name == 'nt':
            return True

        build_only = True

        # we asked for build-only on the command line
        if options.build_only:
            return True

        # The testcase is designed to be built only.
        if self.testcase.build_only:
            return True

        # Do not run slow tests:
        skip_slow = self.testcase.slow and not options.enable_slow
        if skip_slow:
            return True

        runnable = bool(self.testcase.type == "unit" or
                        self.platform.type == "native" or
                        self.platform.simulation in ["nsim", "renode", "qemu"] or
                        options.device_testing)

        if self.platform.simulation == "nsim":
            if not find_executable("nsimdrv"):
                runnable = False

        if self.platform.simulation == "renode":
            if not find_executable("renode"):
                runnable = False

        # console harness allows us to run the test and capture data.
        if self.testcase.harness == 'console':

            # if we have a fixture that is also being supplied on the
            # command-line, then we need to run the test, not just build it.
            if "fixture" in self.testcase.harness_config:
                fixture = self.testcase.harness_config['fixture']
                if fixture in options.fixture:
                    build_only = False
                else:
                    build_only = True
            else:
                build_only = False
        elif self.testcase.harness:
            build_only = True
        else:
            build_only = False

        return not (not build_only and runnable)
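
    # In short (a sketch of the decision above, not additional logic): the
    # instance is actually run only when both of these hold: the platform or
    # handler is runnable (unit test, native, a supported simulator, or
    # --device-testing), and nothing forced build-only (the command line
    # flag, testcase metadata, a slow test without --enable-slow, or a
    # console harness whose required fixture was not supplied via --fixture).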

    def create_overlay(self, platform):
        # Create this in a "sanitycheck/" subdirectory otherwise this
        # will pass this overlay to kconfig.py *twice* and kconfig.cmake
        # will silently give that second time precedence over any
        # --extra-args=CONFIG_*
        subdir = os.path.join(self.build_dir, "sanitycheck")
        os.makedirs(subdir, exist_ok=True)
        file = os.path.join(subdir, "testcase_extra.conf")
        with open(file, "w") as f:
            content = ""

            if self.testcase.extra_configs:
                content = "\n".join(self.testcase.extra_configs)

            if options.enable_coverage:
                if platform.name in options.coverage_platform:
                    content = content + "\nCONFIG_COVERAGE=y"

            if options.enable_asan:
                if platform.type == "native":
                    content = content + "\nCONFIG_ASAN=y"

            f.write(content)
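
    # Example of what the generated overlay could contain for a hypothetical
    # test that sets extra_configs and is built with coverage and ASAN
    # enabled on a native platform (contents depend entirely on the options
    # actually passed):
    #
    #   CONFIG_NEWLIB_LIBC=y
    #   CONFIG_COVERAGE=y
    #   CONFIG_ASAN=y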

    def calculate_sizes(self):
        """Get the RAM/ROM sizes of a test case.

        This can only be run after the instance has been executed by
        MakeGenerator, otherwise there won't be any binaries to measure.

        @return A SizeCalculator object
        """
        fns = glob.glob(os.path.join(self.build_dir, "zephyr", "*.elf"))
        fns.extend(glob.glob(os.path.join(self.build_dir, "zephyr", "*.exe")))
        fns = [x for x in fns if not x.endswith('_prebuilt.elf')]
        if len(fns) != 1:
            raise BuildError("Missing/multiple output ELF binary")

        return SizeCalculator(fns[0], self.testcase.extra_sections)

    def __repr__(self):
        return "<TestCase %s on %s>" % (self.testcase.name, self.platform.name)

class CMake():

    config_re = re.compile('(CONFIG_[A-Za-z0-9_]+)[=]\"?([^\"]*)\"?$')
    dt_re = re.compile('([A-Za-z0-9_]+)[=]\"?([^\"]*)\"?$')

    def __init__(self, testcase, platform, source_dir, build_dir):

        self.cwd = None
        self.capture_output = True

        self.defconfig = {}
        self.cmake_cache = {}

        self.instance = None
        self.testcase = testcase
        self.platform = platform
        self.source_dir = source_dir
        self.build_dir = build_dir
        self.log = "build.log"

    def parse_generated(self):
        self.defconfig = {}
        return {}

    def run_build(self, args=[]):

        verbose("Building %s for %s" % (self.source_dir, self.platform.name))

        cmake_args = []
        cmake_args.extend(args)
        cmake = shutil.which('cmake')
        cmd = [cmake] + cmake_args
        kwargs = dict()

        if self.capture_output:
            kwargs['stdout'] = subprocess.PIPE
            # CMake sends the output of message() to stderr unless it's STATUS
            kwargs['stderr'] = subprocess.STDOUT

        if self.cwd:
            kwargs['cwd'] = self.cwd

        p = subprocess.Popen(cmd, **kwargs)
        out, _ = p.communicate()

        results = {}
        if p.returncode == 0:
            msg = "Finished building %s for %s" % (self.source_dir, self.platform.name)

            self.instance.status = "passed"
            results = {'msg': msg, "returncode": p.returncode, "instance": self.instance}

            if out:
                log_msg = out.decode(sys.getdefaultencoding())
                with open(os.path.join(self.build_dir, self.log), "a") as log:
                    log.write(log_msg)
            else:
                return None
        else:
            # A real error occurred, raise an exception
            if out:
                log_msg = out.decode(sys.getdefaultencoding())
                with open(os.path.join(self.build_dir, self.log), "a") as log:
                    log.write(log_msg)

            overflow_flash = "region `FLASH' overflowed by"
            overflow_ram = "region `RAM' overflowed by"

            if log_msg:
                if log_msg.find(overflow_flash) > 0 or log_msg.find(overflow_ram) > 0:
                    verbose("RAM/ROM Overflow")
                    self.instance.status = "skipped"
                    self.instance.reason = "overflow"
                else:
                    self.instance.status = "failed"
                    self.instance.reason = "Build failure"

            results = {
                "returncode": p.returncode,
                "instance": self.instance,
            }

        return results

    def run_cmake(self, args=[]):

        verbose("Running cmake on %s for %s" % (self.source_dir, self.platform.name))

        ldflags = "-Wl,--fatal-warnings"

        # fixme: add additional cflags based on options
        cmake_args = [
            '-B{}'.format(self.build_dir),
            '-S{}'.format(self.source_dir),
            '-DEXTRA_CFLAGS="-Werror ',
            '-DEXTRA_AFLAGS=-Wa,--fatal-warnings',
            '-DEXTRA_LDFLAGS="{}'.format(ldflags),
            '-G{}'.format(get_generator()[1])
        ]

        if options.cmake_only:
            cmake_args.append("-DCMAKE_EXPORT_COMPILE_COMMANDS=1")

        args = ["-D{}".format(a.replace('"', '')) for a in args]
        cmake_args.extend(args)

        cmake_opts = ['-DBOARD={}'.format(self.platform.name)]
        cmake_args.extend(cmake_opts)

        cmake = shutil.which('cmake')
        cmd = [cmake] + cmake_args
        kwargs = dict()

        if self.capture_output:
            kwargs['stdout'] = subprocess.PIPE
            # CMake sends the output of message() to stderr unless it's STATUS
            kwargs['stderr'] = subprocess.STDOUT

        if self.cwd:
            kwargs['cwd'] = self.cwd

        p = subprocess.Popen(cmd, **kwargs)
        out, _ = p.communicate()

        if p.returncode == 0:
            filter_results = self.parse_generated()
            msg = "Finished building %s for %s" % (self.source_dir, self.platform.name)

            results = {'msg': msg, 'filter': filter_results}

        else:
            self.instance.status = "failed"
            self.instance.reason = "Cmake build failure"
            results = {"returncode": p.returncode}

        if out:
            with open(os.path.join(self.build_dir, self.log), "a") as log:
                log_msg = out.decode(sys.getdefaultencoding())
                log.write(log_msg)

        return results
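
# Illustrative command line assembled by run_cmake() for a hypothetical
# board and output directory (board name, paths and generator are made up;
# the exact flags come from the code above plus any extra_args passed in):
#
#   cmake -Bout/frdm_k64f/tests/foo -S/path/to/tests/foo \
#         -DEXTRA_AFLAGS=-Wa,--fatal-warnings \
#         -DEXTRA_LDFLAGS=-Wl,--fatal-warnings \
#         -GNinja -DBOARD=frdm_k64f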

class FilterBuilder(CMake):

    def __init__(self, testcase, platform, source_dir, build_dir):
        super().__init__(testcase, platform, source_dir, build_dir)

        self.log = "config-sanitycheck.log"

    def parse_generated(self):

        if self.platform.name == "unit_testing":
            return {}

        cmake_cache_path = os.path.join(self.build_dir, "CMakeCache.txt")
        defconfig_path = os.path.join(self.build_dir, "zephyr", ".config")

        with open(defconfig_path, "r") as fp:
            defconfig = {}
            for line in fp.readlines():
                m = self.config_re.match(line)
                if not m:
                    if line.strip() and not line.startswith("#"):
                        sys.stderr.write("Unrecognized line %s\n" % line)
                    continue
                defconfig[m.group(1)] = m.group(2).strip()

        self.defconfig = defconfig

        cmake_conf = {}
        try:
            cache = CMakeCache.from_file(cmake_cache_path)
        except FileNotFoundError:
            cache = {}

        for k in iter(cache):
            cmake_conf[k.name] = k.value

        self.cmake_cache = cmake_conf

        filter_data = {
            "ARCH": self.platform.arch,
            "PLATFORM": self.platform.name
        }
        filter_data.update(os.environ)
        filter_data.update(self.defconfig)
        filter_data.update(self.cmake_cache)

        dts_path = os.path.join(self.build_dir, "zephyr", self.platform.name + ".dts.pre.tmp")
        if self.testcase and self.testcase.tc_filter:
            try:
                if os.path.exists(dts_path):
                    edt = edtlib.EDT(dts_path, [os.path.join(ZEPHYR_BASE, "dts", "bindings")])
                else:
                    edt = None
                res = expr_parser.parse(self.testcase.tc_filter, filter_data, edt)

            except (ValueError, SyntaxError) as se:
                sys.stderr.write(
                    "Failed processing %s\n" % self.testcase.yamlfile)
                raise se

            if not res:
                return {os.path.join(self.platform.name, self.testcase.name): True}
            else:
                return {os.path.join(self.platform.name, self.testcase.name): False}
        else:
            self.platform.filter_data = filter_data
            return filter_data
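
# Sketch of how a testcase "filter" expression relates to the filter_data
# built above (the expression and config symbol are hypothetical):
#
#   filter: CONFIG_SERIAL and ARCH == "arm"
#
# parse_generated() hands that string to expr_parser.parse() together with
# the merged .config values, CMake cache entries, environment variables and
# the ARCH/PLATFORM keys; a falsy result marks the instance as filtered out.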

class ProjectBuilder(FilterBuilder):

    def __init__(self, suite, instance):
        super().__init__(instance.testcase, instance.platform, instance.testcase.source_dir, instance.build_dir)

        self.log = "build.log"
        self.instance = instance
        self.suite = suite

    def setup_handler(self):

        instance = self.instance
        args = []

        # FIXME: Needs simplification
        if instance.platform.simulation == "qemu":
            instance.handler = QEMUHandler(instance, "qemu")
            args.append("QEMU_PIPE=%s" % instance.handler.get_fifo())
            instance.handler.call_make_run = True
        elif instance.testcase.type == "unit":
            instance.handler = BinaryHandler(instance, "unit")
            instance.handler.binary = os.path.join(instance.build_dir, "testbinary")
        elif instance.platform.type == "native":
            instance.handler = BinaryHandler(instance, "native")
            instance.handler.binary = os.path.join(instance.build_dir, "zephyr", "zephyr.exe")
        elif instance.platform.simulation == "nsim":
            if find_executable("nsimdrv"):
                instance.handler = BinaryHandler(instance, "nsim")
                instance.handler.call_make_run = True
        elif instance.platform.simulation == "renode":
            if find_executable("renode"):
                instance.handler = BinaryHandler(instance, "renode")
                instance.handler.pid_fn = os.path.join(instance.build_dir, "renode.pid")
                instance.handler.call_make_run = True
        elif options.device_testing:
            instance.handler = DeviceHandler(instance, "device")

        if instance.handler:
            instance.handler.args = args

    def process(self, message):
        op = message.get('op')

        if not self.instance.handler:
            self.setup_handler()

        # The build process, call cmake and build with configured generator
        if op == "cmake":
            results = self.cmake()
            if self.instance.status == "failed":
                pipeline.put({"op": "report", "test": self.instance})
            elif options.cmake_only:
                pipeline.put({"op": "report", "test": self.instance})
            else:
                if self.instance.name in results['filter'] and results['filter'][self.instance.name]:
                    verbose("filtering %s" % self.instance.name)
                    self.instance.status = "skipped"
                    self.instance.reason = "filter"
                    pipeline.put({"op": "report", "test": self.instance})
                else:
                    pipeline.put({"op": "build", "test": self.instance})

        elif op == "build":
            verbose("build test: %s" % self.instance.name)
            results = self.build()

            if results.get('returncode', 1) > 0:
                pipeline.put({"op": "report", "test": self.instance})
            else:
                if self.instance.run:
                    pipeline.put({"op": "run", "test": self.instance})
                else:
                    pipeline.put({"op": "report", "test": self.instance})
        # Run the generated binary using one of the supported handlers
        elif op == "run":
            verbose("run test: %s" % self.instance.name)
            self.run()
            self.instance.status, _ = self.instance.handler.get_state()
            pipeline.put({
                "op": "report",
                "test": self.instance,
                "state": "executed",
                "status": self.instance.status,
                "reason": self.instance.reason}
            )

        # Report results and output progress to screen
        elif op == "report":
            with report_lock:
                self.report_out()
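
    # A sketch of the message flow handled above: each instance enters the
    # LIFO pipeline as {"op": "cmake", "test": instance}, then advances to
    # {"op": "build", ...}, optionally {"op": "run", ...}, and finally
    # {"op": "report", ...}; failed and filtered instances jump straight to
    # the "report" stage.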

    def report_out(self):
        total_tests_width = len(str(self.suite.total_tests))
        self.suite.total_done += 1
        instance = self.instance

        if instance.status in ["failed", "timeout"]:
            self.suite.total_failed += 1
            if VERBOSE or not TERMINAL:
                status = COLOR_RED + "FAILED " + COLOR_NORMAL + instance.reason
            else:
                info(
                    "\n{:<25} {:<50} {}FAILED{}: {}".format(
                        instance.platform.name,
                        instance.testcase.name,
                        COLOR_RED,
                        COLOR_NORMAL,
                        instance.reason), False)
                if not VERBOSE:
                    log_info_file(instance)
        elif instance.status == "skipped":
            self.suite.total_skipped += 1
            status = COLOR_YELLOW + "SKIPPED" + COLOR_NORMAL
        else:
            status = COLOR_GREEN + "PASSED" + COLOR_NORMAL

        if VERBOSE or not TERMINAL:
            if options.cmake_only:
                more_info = "cmake"
            elif instance.status == "skipped":
                more_info = instance.reason
            else:
                if instance.handler and instance.run:
                    more_info = instance.handler.type_str
                    htime = instance.handler.duration
                    if htime:
                        more_info += " {:.3f}s".format(htime)
                else:
                    more_info = "build"

            info("{:>{}}/{} {:<25} {:<50} {} ({})".format(
                self.suite.total_done, total_tests_width, self.suite.total_tests, instance.platform.name,
                instance.testcase.name, status, more_info))

            if instance.status in ["failed", "timeout"]:
                log_info_file(instance)
        else:
            sys.stdout.write("\rtotal complete: %s%4d/%4d%s %2d%% skipped: %s%4d%s, failed: %s%4d%s" % (
                COLOR_GREEN,
                self.suite.total_done,
                self.suite.total_tests,
                COLOR_NORMAL,
                int((float(self.suite.total_done) / self.suite.total_tests) * 100),
                COLOR_YELLOW if self.suite.total_skipped > 0 else COLOR_NORMAL,
                self.suite.total_skipped,
                COLOR_NORMAL,
                COLOR_RED if self.suite.total_failed > 0 else COLOR_NORMAL,
                self.suite.total_failed,
                COLOR_NORMAL
                )
            )
            sys.stdout.flush()

    def cmake(self):

        instance = self.instance
        args = self.testcase.extra_args[:]

        if options.extra_args:
            args += options.extra_args

        if instance.handler:
            args += instance.handler.args

        # merge overlay files into one variable
        overlays = ""
        idx = 0
        for arg in args:
            match = re.search('OVERLAY_CONFIG="(.*)"', arg)
            if match:
                overlays += match.group(1)
                del args[idx]
            idx += 1

        if (self.testcase.extra_configs or options.coverage or
                options.enable_asan):
            args.append("OVERLAY_CONFIG=\"%s %s\"" % (overlays,
                        os.path.join(instance.build_dir,
                                     "sanitycheck", "testcase_extra.conf")))

        results = self.run_cmake(args)
        return results

    def build(self):
        results = self.run_build(['--build', self.build_dir])
        return results

    def run(self):

        instance = self.instance

        if instance.handler.type_str == "device":
            instance.handler.suite = self.suite

        instance.handler.handle()

        sys.stdout.flush()

pipeline = queue.LifoQueue()


class BoundedExecutor(concurrent.futures.ThreadPoolExecutor):
    """BoundedExecutor behaves as a ThreadPoolExecutor which will block on
    calls to submit() once the limit given as "bound" work items are queued for
    execution.

    :param bound: Integer - the maximum number of items in the work queue
    :param max_workers: Integer - the size of the thread pool
    """

    def __init__(self, bound, max_workers, **kwargs):
        super().__init__(max_workers)
        # self.executor = ThreadPoolExecutor(max_workers=max_workers)
        self.semaphore = BoundedSemaphore(bound + max_workers)

    def submit(self, fn, *args, **kwargs):
        self.semaphore.acquire()
        try:
            future = super().submit(fn, *args, **kwargs)
        except:
            self.semaphore.release()
            raise
        else:
            future.add_done_callback(lambda x: self.semaphore.release())
        return future
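
# Minimal usage sketch (the worker function is hypothetical) showing the
# intent of BoundedExecutor: submit() blocks once `bound` items are already
# waiting, which keeps the work queue from growing without limit.
#
#   executor = BoundedExecutor(bound=20, max_workers=4)
#   futures = [executor.submit(process_instance, inst) for inst in work]
#   for f in concurrent.futures.as_completed(futures):
#       f.result()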

class TestSuite:
    config_re = re.compile('(CONFIG_[A-Za-z0-9_]+)[=]\"?([^\"]*)\"?$')
    dt_re = re.compile('([A-Za-z0-9_]+)[=]\"?([^\"]*)\"?$')

    tc_schema = scl.yaml_load(
        os.path.join(ZEPHYR_BASE,
                     "scripts", "sanity_chk", "testcase-schema.yaml"))

    def __init__(self, board_root_list, testcase_roots, outdir):

        self.roots = testcase_roots
        if not isinstance(board_root_list, list):
            self.board_roots = [board_root_list]
        else:
            self.board_roots = board_root_list

        # Keep track of which test cases we've filtered out and why
        self.testcases = {}
        self.platforms = []
        self.default_platforms = []
        self.outdir = os.path.abspath(outdir)
        self.discards = None
        self.load_errors = 0
        self.instances = dict()

        self.total_tests = 0  # number of test instances
        self.total_cases = 0  # number of test cases
        self.total_done = 0  # tests completed
        self.total_failed = 0
        self.total_skipped = 0

        self.total_platforms = 0
        self.start_time = 0
        self.duration = 0
        self.warnings = 0
        self.cv = threading.Condition()

        # hardcoded for now
        self.connected_hardware = []

        if options.jobs:
            self.jobs = options.jobs
        elif options.build_only:
            self.jobs = multiprocessing.cpu_count() * 2
        else:
            self.jobs = multiprocessing.cpu_count()

        info("JOBS: %d" % self.jobs)

    def update(self):
        self.total_tests = len(self.instances)
        self.total_cases = len(self.testcases)

    def compare_metrics(self, filename):
        # name, datatype, lower results better
        interesting_metrics = [("ram_size", int, True),
                               ("rom_size", int, True)]

        if not os.path.exists(filename):
            info("Cannot compare metrics, %s not found" % filename)
            return []

        results = []
        saved_metrics = {}
        with open(filename) as fp:
            cr = csv.DictReader(fp)
            for row in cr:
                d = {}
                for m, _, _ in interesting_metrics:
                    d[m] = row[m]
                saved_metrics[(row["test"], row["platform"])] = d

        for instance in self.instances.values():
            mkey = (instance.testcase.name, instance.platform.name)
            if mkey not in saved_metrics:
                continue
            sm = saved_metrics[mkey]
            for metric, mtype, lower_better in interesting_metrics:
                if metric not in instance.metrics:
                    continue
                if sm[metric] == "":
                    continue
                delta = instance.metrics.get(metric, 0) - mtype(sm[metric])
                if delta == 0:
                    continue
                results.append((instance, metric, instance.metrics.get(metric, 0), delta,
                                lower_better))
        return results

    def misc_reports(self, report, show_footprint, all_deltas,
                     footprint_threshold, last_metrics):

        if not report:
            return

        deltas = self.compare_metrics(report)
        warnings = 0
        if deltas and show_footprint:
            for i, metric, value, delta, lower_better in deltas:
                if not all_deltas and ((delta < 0 and lower_better) or
                                       (delta > 0 and not lower_better)):
                    continue

                percentage = (float(delta) / float(value - delta))
                if not all_deltas and (percentage <
                                       (footprint_threshold / 100.0)):
                    continue

                info("{:<25} {:<60} {}{}{}: {} {:<+4}, is now {:6} {:+.2%}".format(
                    i.platform.name, i.testcase.name, COLOR_YELLOW,
                    "INFO" if all_deltas else "WARNING", COLOR_NORMAL,
                    metric, delta, value, percentage))
                warnings += 1

        if warnings:
            info("Deltas based on metrics from last %s" %
                 ("release" if not last_metrics else "run"))

    def summary(self, unrecognized_sections):
        failed = 0
        for instance in self.instances.values():
            if instance.status == "failed":
                failed += 1
            elif instance.metrics.get("unrecognized") and not unrecognized_sections:
                info("%sFAILED%s: %s has unrecognized binary sections: %s" %
                     (COLOR_RED, COLOR_NORMAL, instance.name,
                      str(instance.metrics.get("unrecognized", []))))
                failed += 1

        if self.total_tests and self.total_tests != self.total_skipped:
            pass_rate = (float(self.total_tests - self.total_failed - self.total_skipped) /
                         float(self.total_tests - self.total_skipped))
        else:
            pass_rate = 0

        info("{}{} of {}{} tests passed ({:.2%}), {}{}{} failed, {} skipped with {}{}{} warnings in {:.2f} seconds".format(
            COLOR_RED if failed else COLOR_GREEN,
            self.total_tests - self.total_failed - self.total_skipped,
            self.total_tests,
            COLOR_NORMAL,
            pass_rate,
            COLOR_RED if self.total_failed else COLOR_NORMAL,
            self.total_failed,
            COLOR_NORMAL,
            self.total_skipped,
            COLOR_YELLOW if self.warnings else COLOR_NORMAL,
            self.warnings,
            COLOR_NORMAL,
            self.duration))

        platforms = set(p.platform for p in self.instances.values())
        self.total_platforms = len(self.platforms)
        if self.platforms:
            info("In total {} test cases were executed on {} out of total {} platforms ({:02.2f}%)".format(
                self.total_cases,
                len(platforms),
                self.total_platforms,
                (100 * len(platforms) / len(self.platforms))
            ))

    def save_reports(self):
        if not self.instances:
            return

        report_name = "sanitycheck"
        if options.report_name:
            report_name = options.report_name

        if options.report_dir:
            os.makedirs(options.report_dir, exist_ok=True)
            filename = os.path.join(options.report_dir, report_name)
            outdir = options.report_dir
        else:
            filename = os.path.join(options.outdir, report_name)
            outdir = options.outdir

        if not options.no_update:
            self.xunit_report(filename + ".xml")
            self.csv_report(filename + ".csv")
            self.target_report(outdir)
            if self.discards:
                self.discard_report(filename + "_discard.csv")

        if options.release:
            self.csv_report(RELEASE_DATA)

        if log_file:
            log_file.close()

    def load_hardware_map_from_cmdline(self, serial, platform):
        device = {
            "serial": serial,
            "platform": platform,
            "counter": 0,
            "available": True,
            "connected": True
        }
        self.connected_hardware = [device]

    def load_hardware_map(self, map_file):
        with open(map_file, 'r') as stream:
            try:
                self.connected_hardware = yaml.safe_load(stream)
            except yaml.YAMLError as exc:
                print(exc)
            for i in self.connected_hardware:
                i['counter'] = 0

    def add_configurations(self):

        for board_root in self.board_roots:
            board_root = os.path.abspath(board_root)

            debug("Reading platform configuration files under %s..." %
                  board_root)

            for file in glob.glob(os.path.join(board_root, "*", "*", "*.yaml")):
                verbose("Found platform configuration " + file)
                try:
                    platform = Platform()
                    platform.load(file)
                    if platform.sanitycheck:
                        self.platforms.append(platform)
                        if platform.default:
                            self.default_platforms.append(platform.name)

                except RuntimeError as e:
                    error("E: %s: can't load: %s" % (file, e))
                    self.load_errors += 1

    def get_all_tests(self):
        tests = []
        for _, tc in self.testcases.items():
            for case in tc.cases:
                tests.append(case)

        return tests

    @staticmethod
    def get_toolchain():
        toolchain = os.environ.get("ZEPHYR_TOOLCHAIN_VARIANT", None) or \
                    os.environ.get("ZEPHYR_GCC_VARIANT", None)

        if toolchain == "gccarmemb":
            # Remove this translation when gccarmemb is no longer supported.
            toolchain = "gnuarmemb"

        try:
            if not toolchain:
                raise SanityRuntimeError("E: Variable ZEPHYR_TOOLCHAIN_VARIANT is not defined")
        except Exception as e:
            print(str(e))
            sys.exit(2)

        return toolchain

    def add_testcases(self):
        for root in self.roots:
            root = os.path.abspath(root)

            debug("Reading test case configuration files under %s..." % root)

            for dirpath, dirnames, filenames in os.walk(root, topdown=True):
                verbose("scanning %s" % dirpath)
                if 'sample.yaml' in filenames:
                    filename = 'sample.yaml'
                elif 'testcase.yaml' in filenames:
                    filename = 'testcase.yaml'
                else:
                    continue

                verbose("Found possible test case in " + dirpath)

                dirnames[:] = []
                tc_path = os.path.join(dirpath, filename)
                self.add_testcase(tc_path, root)

    def add_testcase(self, tc_data_file, root):
        try:
            parsed_data = SanityConfigParser(tc_data_file, self.tc_schema)
            parsed_data.load()

            tc_path = os.path.dirname(tc_data_file)
            workdir = os.path.relpath(tc_path, root)

            for name in parsed_data.tests.keys():
                tc = TestCase()
                tc.name = tc.get_unique(root, workdir, name)

                tc_dict = parsed_data.get_test(name, testcase_valid_keys)

                tc.source_dir = tc_path
                tc.yamlfile = tc_data_file

                tc.id = name
                tc.type = tc_dict["type"]
                tc.tags = tc_dict["tags"]
                tc.extra_args = tc_dict["extra_args"]
                tc.extra_configs = tc_dict["extra_configs"]
                tc.arch_whitelist = tc_dict["arch_whitelist"]
                tc.arch_exclude = tc_dict["arch_exclude"]
                tc.skip = tc_dict["skip"]
                tc.platform_exclude = tc_dict["platform_exclude"]
                tc.platform_whitelist = tc_dict["platform_whitelist"]
                tc.toolchain_exclude = tc_dict["toolchain_exclude"]
                tc.toolchain_whitelist = tc_dict["toolchain_whitelist"]
                tc.tc_filter = tc_dict["filter"]
                tc.timeout = tc_dict["timeout"]
                tc.harness = tc_dict["harness"]
                tc.harness_config = tc_dict["harness_config"]
                tc.build_only = tc_dict["build_only"]
                tc.build_on_all = tc_dict["build_on_all"]
                tc.slow = tc_dict["slow"]
                tc.min_ram = tc_dict["min_ram"]
                tc.depends_on = tc_dict["depends_on"]
                tc.min_flash = tc_dict["min_flash"]
                tc.extra_sections = tc_dict["extra_sections"]

                tc.parse_subcases(tc_path)

                if tc.name:
                    self.testcases[tc.name] = tc

        except Exception as e:
            error("E: %s: can't load (skipping): %s" % (tc_data_file, e))
            self.load_errors += 1
            return False

        return True
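
    # For reference, a small testcase.yaml of the kind add_testcase() parses
    # (entry names and values are hypothetical; the valid keys are documented
    # in the module docstring and testcase-schema.yaml):
    #
    #   tests:
    #     kernel.semaphore:
    #       tags: kernel
    #       min_ram: 16
    #       harness: console
    #     kernel.semaphore.userspace:
    #       tags: kernel userspace
    #       filter: CONFIG_USERSPACE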

    def get_platform(self, name):
        selected_platform = None
        for platform in self.platforms:
            if platform.name == name:
                selected_platform = platform
                break
        return selected_platform

    def get_last_failed(self):
        last_run = os.path.join(options.outdir, "sanitycheck.csv")
        try:
            if not os.path.exists(last_run):
                raise SanityRuntimeError("Couldn't find last sanitycheck run: %s" % last_run)
        except Exception as e:
            print(str(e))
            sys.exit(2)

        total_tests = 0
        with open(last_run, "r") as fp:
            cr = csv.DictReader(fp)
            instance_list = []
            for row in cr:
                total_tests += 1
                if row["passed"] == "True":
                    continue
                test = row["test"]
                platform = self.get_platform(row["platform"])
                instance = TestInstance(self.testcases[test], platform, self.outdir)
                instance.create_overlay(platform)
                instance_list.append(instance)
            self.add_instances(instance_list)

        tests_to_run = len(self.instances)
        info("%d tests passed already, retrying %d tests" % (total_tests - tests_to_run, tests_to_run))

    def load_from_file(self, file):
        try:
            if not os.path.exists(file):
                raise SanityRuntimeError(
                    "Couldn't find input file with list of tests.")
        except Exception as e:
            print(str(e))
            sys.exit(2)

        with open(file, "r") as fp:
            cr = csv.DictReader(fp)
            instance_list = []
            for row in cr:
                if row["arch"] == "arch":
                    continue
                test = row["test"]
                platform = self.get_platform(row["platform"])
                instance = TestInstance(self.testcases[test], platform, self.outdir)
                instance.create_overlay(platform)
                instance_list.append(instance)
            self.add_instances(instance_list)
def apply_filters(self):
|
2018-02-15 14:20:18 +01:00
|
|
|
|
2019-06-22 17:04:10 +02:00
|
|
|
toolchain = self.get_toolchain()
|
2018-02-15 14:20:18 +01:00
|
|
|
|
2015-07-17 21:03:52 +02:00
|
|
|
discards = {}
|
2017-12-30 17:48:43 +01:00
|
|
|
platform_filter = options.platform
|
2018-07-12 16:25:22 +02:00
|
|
|
testcase_filter = run_individual_tests
|
2017-12-30 17:48:43 +01:00
|
|
|
arch_filter = options.arch
|
|
|
|
tag_filter = options.tag
|
|
|
|
exclude_tag = options.exclude_tag
|
2017-10-04 22:14:27 +02:00
|
|
|
|
2015-07-17 21:03:52 +02:00
|
|
|
verbose("platform filter: " + str(platform_filter))
|
|
|
|
verbose(" arch_filter: " + str(arch_filter))
|
|
|
|
verbose(" tag_filter: " + str(tag_filter))
|
2016-10-24 23:08:56 +02:00
|
|
|
verbose(" exclude_tag: " + str(exclude_tag))
|
2015-07-17 21:03:52 +02:00
|
|
|
|
2016-03-22 18:08:35 +01:00
|
|
|
default_platforms = False
|
2015-07-17 21:03:52 +02:00
|
|
|
|
2019-06-22 17:04:10 +02:00
|
|
|
if platform_filter:
|
|
|
|
platforms = list(filter(lambda p: p.name in platform_filter, self.platforms))
|
|
|
|
else:
|
|
|
|
platforms = self.platforms
|
|
|
|
|
|
|
|
if options.all:
|
2015-07-17 21:03:52 +02:00
|
|
|
info("Selecting all possible platforms per test case")
|
2016-03-22 18:08:35 +01:00
|
|
|
# When --all used, any --platform arguments ignored
|
2015-07-17 21:03:52 +02:00
|
|
|
platform_filter = []
|
2016-03-22 18:08:35 +01:00
|
|
|
elif not platform_filter:
|
|
|
|
info("Selecting default platforms per test case")
|
|
|
|
default_platforms = True
|
2015-07-17 21:03:52 +02:00
|
|
|
|
2019-06-22 17:04:10 +02:00
|
|
|
info("Building initial testcase list...")
|
2018-12-03 01:12:21 +01:00
|
|
|
|
2016-02-22 22:28:10 +01:00
|
|
|
for tc_name, tc in self.testcases.items():
|
2019-06-22 17:04:10 +02:00
|
|
|
# list of instances per testcase, aka configurations.
|
|
|
|
instance_list = []
|
|
|
|
for plat in platforms:
|
|
|
|
instance = TestInstance(tc, plat, self.outdir)
|
2015-07-17 21:03:52 +02:00
|
|
|
|
2019-06-22 17:04:10 +02:00
|
|
|
if (plat.arch == "unit") != (tc.type == "unit"):
|
|
|
|
# Discard silently
|
|
|
|
continue
|
                if options.device_testing and instance.build_only:
                    discards[instance] = "Not runnable on device"
                    continue

                if tc.skip:
                    discards[instance] = "Skip filter"
                    continue

                if tc.build_on_all and not platform_filter:
                    platform_filter = []

                if tag_filter and not tc.tags.intersection(tag_filter):
                    discards[instance] = "Command line testcase tag filter"
                    continue

                if exclude_tag and tc.tags.intersection(exclude_tag):
                    discards[instance] = "Command line testcase exclude filter"
                    continue

                if testcase_filter and tc_name not in testcase_filter:
                    discards[instance] = "Testcase name filter"
                    continue

                if arch_filter and plat.arch not in arch_filter:
                    discards[instance] = "Command line testcase arch filter"
                    continue

                if tc.arch_whitelist and plat.arch not in tc.arch_whitelist:
                    discards[instance] = "Not in test case arch whitelist"
                    continue

                if tc.arch_exclude and plat.arch in tc.arch_exclude:
                    discards[instance] = "In test case arch exclude"
                    continue

                if tc.platform_exclude and plat.name in tc.platform_exclude:
                    discards[instance] = "In test case platform exclude"
                    continue

                if tc.toolchain_exclude and toolchain in tc.toolchain_exclude:
                    discards[instance] = "In test case toolchain exclude"
                    continue

                if platform_filter and plat.name not in platform_filter:
                    discards[instance] = "Command line platform filter"
                    continue

                if tc.platform_whitelist and plat.name not in tc.platform_whitelist:
                    discards[instance] = "Not in testcase platform whitelist"
                    continue

                if tc.toolchain_whitelist and toolchain not in tc.toolchain_whitelist:
                    discards[instance] = "Not in testcase toolchain whitelist"
                    continue

                if not plat.env_satisfied:
                    discards[instance] = "Environment ({}) not satisfied".format(", ".join(plat.env))
                    continue

                if not options.force_toolchain \
                        and toolchain and (toolchain not in plat.supported_toolchains) \
                        and tc.type != 'unit':
                    discards[instance] = "Not supported by the toolchain"
                    continue

                if plat.ram < tc.min_ram:
                    discards[instance] = "Not enough RAM"
                    continue

                if tc.depends_on:
                    dep_intersection = tc.depends_on.intersection(set(plat.supported))
                    if dep_intersection != set(tc.depends_on):
                        discards[instance] = "No hardware support"
                        continue

                if plat.flash < tc.min_flash:
                    discards[instance] = "Not enough FLASH"
                    continue

                if set(plat.ignore_tags) & tc.tags:
                    discards[instance] = "Excluded tags per platform"
                    continue

                # if nothing stopped us until now, it means this configuration
                # needs to be added.
                instance_list.append(instance)

            # no configurations, so jump to next testcase
            if not instance_list:
                continue

            # if sanitycheck was launched with no platform options at all, we
            # take all default platforms
            if default_platforms and not tc.build_on_all:
                if tc.platform_whitelist:
                    a = set(self.default_platforms)
                    b = set(tc.platform_whitelist)
                    c = a.intersection(b)
                    if c:
                        aa = list(filter(lambda tc: tc.platform.name in c, instance_list))
                        self.add_instances(aa)
                    else:
                        self.add_instances(instance_list[:1])
                else:
                    instances = list(filter(lambda tc: tc.platform.default, instance_list))
                    self.add_instances(instances)

                for instance in list(filter(lambda tc: not tc.platform.default, instance_list)):
                    discards[instance] = "Not a default test platform"

            else:
                self.add_instances(instance_list)

        for _, case in self.instances.items():
            case.create_overlay(case.platform)

        self.discards = discards

        return discards
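
    # Note on the filter chain above: the checks run in order and the first one
    # that matches records the reason, so a configuration rejected by an early
    # filter (e.g. the command line platform filter) keeps that reason even if a
    # later check (such as the testcase platform_whitelist) would also have
    # discarded it. A hypothetical sketch (not part of this script) of how the
    # returned mapping could be inspected after a dry run:
    #
    #     discards = suite.apply_filters()
    #     for instance, reason in discards.items():
    #         print(instance.platform.name, instance.testcase.name, reason)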
|
|
|
|
|
2019-06-22 17:04:10 +02:00
|
|
|
def add_instances(self, instance_list):
|
|
|
|
for instance in instance_list:
|
|
|
|
self.instances[instance.name] = instance
|
2016-04-07 21:10:25 +02:00
|
|
|
|
2019-06-22 17:04:10 +02:00
|
|
|
def add_tasks_to_queue(self):
|
|
|
|
for instance in self.instances.values():
|
|
|
|
if options.test_only:
|
|
|
|
if instance.run:
|
|
|
|
pipeline.put({"op": "run", "test": instance, "status": "built"})
|
|
|
|
else:
|
|
|
|
if instance.status not in ['passed', 'skipped']:
|
|
|
|
instance.status = None
|
|
|
|
pipeline.put({"op": "cmake", "test": instance})
|
|
|
|
|
|
|
|
return "DONE FEEDING"

    def execute(self):
        def calc_one_elf_size(instance):
            if instance.status not in ["failed", "skipped"]:
                if instance.platform.type != "native":
                    size_calc = instance.calculate_sizes()
                    instance.metrics["ram_size"] = size_calc.get_ram_size()
                    instance.metrics["rom_size"] = size_calc.get_rom_size()
                    instance.metrics["unrecognized"] = size_calc.unrecognized_sections()
                else:
                    instance.metrics["ram_size"] = 0
                    instance.metrics["rom_size"] = 0
                    instance.metrics["unrecognized"] = []

                instance.metrics["handler_time"] = instance.handler.duration if instance.handler else 0

        info("Adding tasks to the queue...")
        # We can use a with statement to ensure threads are cleaned up promptly
        with BoundedExecutor(bound=self.jobs, max_workers=self.jobs) as executor:

            # start a future for a thread which sends work in through the queue
            future_to_test = {
                executor.submit(self.add_tasks_to_queue): 'FEEDER DONE'}

            while future_to_test:
                # check for status of the futures which are currently working
                done, _ = concurrent.futures.wait(
                    future_to_test, timeout=0.25,
                    return_when=concurrent.futures.FIRST_COMPLETED)

                # if there is incoming work, start a new future
                while not pipeline.empty():
                    # fetch the next message from the queue
                    message = pipeline.get()
                    test = message['test']

                    # start the next build/run step and map the future back to its test
                    pb = ProjectBuilder(self, test)
                    future_to_test[executor.submit(pb.process, message)] = test.name

                # process any completed futures
                for future in done:
                    test = future_to_test[future]
                    try:
                        data = future.result()
                    except Exception as exc:
                        sys.exit('%r generated an exception: %s' % (test, exc))
                    else:
                        if data:
                            verbose(data)

                    # remove the now completed future
                    del future_to_test[future]

        if options.enable_size_report and not options.cmake_only:
            # Parallelize size calculation
            executor = concurrent.futures.ThreadPoolExecutor(self.jobs)
            futures = [executor.submit(calc_one_elf_size, instance)
                       for instance in self.instances.values()]
            concurrent.futures.wait(futures)
        else:
            for instance in self.instances.values():
                instance.metrics["ram_size"] = 0
                instance.metrics["rom_size"] = 0
                instance.metrics["handler_time"] = instance.handler.duration if instance.handler else 0
                instance.metrics["unrecognized"] = []

    def discard_report(self, filename):

        try:
            if self.discards is None:
                raise SanityRuntimeError("apply_filters() hasn't been run!")
        except Exception as e:
            error(str(e))
            sys.exit(2)

        with open(filename, "wt") as csvfile:
            fieldnames = ["test", "arch", "platform", "reason"]
            cw = csv.DictWriter(csvfile, fieldnames, lineterminator=os.linesep)
            cw.writeheader()
            for instance, reason in sorted(self.discards.items()):
                rowdict = {"test": instance.testcase.name,
                           "arch": instance.platform.arch,
                           "platform": instance.platform.name,
                           "reason": reason}
                cw.writerow(rowdict)
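
    # discard_report() writes one row per filtered-out configuration. An
    # illustrative (made-up) sanitycheck_discard.csv row, matching the field
    # names above, could look like:
    #
    #     test,arch,platform,reason
    #     tests/kernel/fifo/fifo_api/kernel.fifo.poll,arm,frdm_k64f,Not in testcase platform whitelist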

    def target_report(self, outdir):
        run = "Sanitycheck"
        eleTestsuite = None

        platforms = {inst.platform.name for _, inst in self.instances.items()}
        for platform in platforms:
            errors = 0
            passes = 0
            fails = 0
            duration = 0
            skips = 0
            for _, instance in self.instances.items():
                if instance.platform.name != platform:
                    continue

                handler_time = instance.metrics.get('handler_time', 0)
                duration += handler_time
                for k in instance.results.keys():
                    if instance.results[k] == 'PASS':
                        passes += 1
                    elif instance.results[k] == 'BLOCK':
                        errors += 1
                    elif instance.results[k] == 'SKIP':
                        skips += 1
                    else:
                        fails += 1

            eleTestsuites = ET.Element('testsuites')
            eleTestsuite = ET.SubElement(eleTestsuites, 'testsuite',
                                         name=run, time="%f" % duration,
                                         tests="%d" % (errors + passes + fails),
                                         failures="%d" % fails,
                                         errors="%d" % errors, skipped="%d" % skips)

            handler_time = 0

            # print out test results
            for _, instance in self.instances.items():
                if instance.platform.name != platform:
                    continue
                handler_time = instance.metrics.get('handler_time', 0)
                for k in instance.results.keys():
                    eleTestcase = ET.SubElement(
                        eleTestsuite, 'testcase',
                        classname="%s:%s" % (instance.platform.name, os.path.basename(instance.testcase.name)),
                        name="%s" % (k), time="%f" % handler_time)
                    if instance.results[k] in ['FAIL', 'BLOCK']:
                        el = None

                        if instance.results[k] == 'FAIL':
                            el = ET.SubElement(
                                eleTestcase,
                                'failure',
                                type="failure",
                                message="failed")
                        elif instance.results[k] == 'BLOCK':
                            el = ET.SubElement(
                                eleTestcase,
                                'error',
                                type="failure",
                                message="failed")
                        p = os.path.join(options.outdir, instance.platform.name, instance.testcase.name)
                        log_file = os.path.join(p, "handler.log")

                        if os.path.exists(log_file):
                            with open(log_file, "rb") as f:
                                log = f.read().decode("utf-8")
                                filtered_string = ''.join(filter(lambda x: x in string.printable, log))
                                el.text = filtered_string

                    elif instance.results[k] == 'SKIP':
                        el = ET.SubElement(
                            eleTestcase,
                            'skipped',
                            type="skipped",
                            message="Skipped")

            result = ET.tostring(eleTestsuites)
            with open(os.path.join(outdir, platform + ".xml"), 'wb') as f:
                f.write(result)

    def xunit_report(self, filename):
        fails = 0
        passes = 0
        errors = 0
        skips = 0
        duration = 0

        for instance in self.instances.values():
            handler_time = instance.metrics.get('handler_time', 0)
            duration += handler_time
            if instance.status == "failed":
                if instance.reason in ['build_error', 'handler_crash']:
                    errors += 1
                else:
                    fails += 1
            elif instance.status == 'skipped':
                skips += 1
            else:
                passes += 1

        run = "Sanitycheck"
        eleTestsuite = None
        append = options.only_failed

        # When we re-run the tests, we re-use the results and update only with
        # the newly run tests.
        if os.path.exists(filename) and append:
            tree = ET.parse(filename)
            eleTestsuites = tree.getroot()
            eleTestsuite = tree.findall('testsuite')[0]
        else:
            eleTestsuites = ET.Element('testsuites')
            eleTestsuite = ET.SubElement(eleTestsuites, 'testsuite',
                                         name=run, time="%f" % duration,
                                         tests="%d" % (errors + passes + fails + skips),
                                         failures="%d" % fails,
                                         errors="%d" % errors, skip="%s" % skips)

        for instance in self.instances.values():

            # remove testcases that are a re-run
            if append:
                for tc in eleTestsuite.findall('testcase'):
                    if tc.get('classname') == "%s:%s" % (
                            instance.platform.name, instance.testcase.name):
                        eleTestsuite.remove(tc)

            handler_time = 0
            if instance.status != "failed" and instance.handler:
                handler_time = instance.metrics.get("handler_time", 0)

            eleTestcase = ET.SubElement(
                eleTestsuite, 'testcase', classname="%s:%s" %
                (instance.platform.name, instance.testcase.name), name="%s" %
                (instance.testcase.name), time="%f" % handler_time)

            if instance.status == "failed":
                failure = ET.SubElement(
                    eleTestcase,
                    'failure',
                    type="failure",
                    message=instance.reason)
                p = ("%s/%s/%s" % (options.outdir, instance.platform.name, instance.testcase.name))
                bl = os.path.join(p, "build.log")
                hl = os.path.join(p, "handler.log")
                log_file = bl
                if instance.reason != 'Build error':
                    if os.path.exists(hl):
                        log_file = hl
                    else:
                        log_file = bl

                if os.path.exists(log_file):
                    with open(log_file, "rb") as f:
                        log = f.read().decode("utf-8")
                        filtered_string = ''.join(filter(lambda x: x in string.printable, log))
                        failure.text = filtered_string
                        f.close()

            elif instance.status == "skipped":
                ET.SubElement(eleTestcase, 'skipped', type="skipped", message="Skipped")

        result = ET.tostring(eleTestsuites)
        with open(filename, 'wb') as report:
            report.write(result)
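
    # Rough shape of the JUnit-style report produced above (values and names
    # are illustrative, not taken from a real run):
    #
    #   <testsuites>
    #     <testsuite name="Sanitycheck" time="12.3" tests="10" failures="1" errors="0" skip="2">
    #       <testcase classname="frdm_k64f:tests/kernel/fifo/fifo_api/kernel.fifo.poll"
    #                 name="tests/kernel/fifo/fifo_api/kernel.fifo.poll" time="1.2">
    #         <failure type="failure" message="Timeout">...captured log text...</failure>
    #       </testcase>
    #     </testsuite>
    #   </testsuites>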

    def csv_report(self, filename):
        with open(filename, "wt") as csvfile:
            fieldnames = ["test", "arch", "platform", "passed", "status",
                          "extra_args", "handler", "handler_time", "ram_size",
                          "rom_size"]
            cw = csv.DictWriter(csvfile, fieldnames, lineterminator=os.linesep)
            cw.writeheader()
            for instance in sorted(self.instances.values()):
                rowdict = {"test": instance.testcase.name,
                           "arch": instance.platform.arch,
                           "platform": instance.platform.name,
                           "extra_args": " ".join(instance.testcase.extra_args),
                           "handler": instance.platform.simulation}

                if instance.status in ["failed", "timeout"]:
                    rowdict["passed"] = False
                    rowdict["status"] = instance.reason
                else:
                    rowdict["passed"] = True
                    if instance.handler:
                        rowdict["handler_time"] = instance.metrics.get("handler_time", 0)
                    ram_size = instance.metrics.get("ram_size", 0)
                    rom_size = instance.metrics.get("rom_size", 0)
                    rowdict["ram_size"] = ram_size
                    rowdict["rom_size"] = rom_size
                cw.writerow(rowdict)
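
    # Illustrative sanitycheck.csv row written by csv_report() for a passing
    # instance (all values made up); failing rows carry instance.reason in the
    # 'status' column instead:
    #
    #     test,arch,platform,passed,status,extra_args,handler,handler_time,ram_size,rom_size
    #     tests/kernel/fifo/fifo_api/kernel.fifo.poll,arm,frdm_k64f,True,,,qemu,1.25,12345,34567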

    def get_testcase(self, identifier):
        results = []
        for _, tc in self.testcases.items():
            for case in tc.cases:
                if case == identifier:
                    results.append(tc)
        return results


def parse_arguments():

    parser = argparse.ArgumentParser(
        description=__doc__,
        formatter_class=argparse.RawDescriptionHelpFormatter)
    parser.fromfile_prefix_chars = "+"

    case_select = parser.add_argument_group("Test case selection",
                                            """
Artificially long but functional example:
    $ ./scripts/sanitycheck -v \\
          --testcase-root tests/ztest/base \\
          --testcase-root tests/kernel \\
          --test tests/ztest/base/testing.ztest.verbose_0 \\
          --test tests/kernel/fifo/fifo_api/kernel.fifo.poll

    "kernel.fifo.poll" is one of the test section names in
                                 __/fifo_api/testcase.yaml
    """)

    parser.add_argument("--force-toolchain", action="store_true",
                        help="Do not filter based on toolchain, use the set "
                             "toolchain unconditionally")
    parser.add_argument(
        "-p", "--platform", action="append",
        help="Platform filter for testing. This option may be used multiple "
             "times. Testcases will only be built/run on the platforms "
             "specified. If this option is not used, then platforms marked "
             "as default in the platform metadata file will be chosen "
             "to build and test. ")
    parser.add_argument(
        "-a", "--arch", action="append",
        help="Arch filter for testing. Takes precedence over --platform. "
             "If unspecified, test all arches. Multiple invocations "
             "are treated as a logical 'or' relationship")
    parser.add_argument(
        "-t", "--tag", action="append",
        help="Specify tags to restrict which tests to run by tag value. "
             "Default is to not do any tag filtering. Multiple invocations "
             "are treated as a logical 'or' relationship")
    parser.add_argument("-e", "--exclude-tag", action="append",
                        help="Specify tags of tests that should not run. "
                             "Default is to run all tests with all tags.")
    case_select.add_argument(
        "-f", "--only-failed", action="store_true",
        help="Run only those tests that failed the previous sanity check "
             "invocation.")

    parser.add_argument(
        "--retry-failed", type=int, default=0,
        help="Retry failing tests again, up to the number of times specified.")

    test_xor_subtest = case_select.add_mutually_exclusive_group()

    test_xor_subtest.add_argument(
        "-s", "--test", action="append",
        help="Run only the specified test cases. These are named by "
             "<path/relative/to/Zephyr/base/section.name.in.testcase.yaml>")

    test_xor_subtest.add_argument(
        "--sub-test", action="append",
        help="""Recursively find sub-test functions and run the entire
        test section where they were found, including all sibling test
        functions. Sub-tests are named by:
        section.name.in.testcase.yaml.function_name_without_test_prefix
        Example: kernel.fifo.poll.fifo_loop
        """)

    parser.add_argument(
        "-l", "--all", action="store_true",
        help="Build/test on all platforms. Any --platform arguments "
             "ignored.")

    parser.add_argument(
        "-o", "--report-dir",
        help="""Output reports containing results of the test run into the
        specified directory.
        The output will be both in CSV and JUNIT format
        (sanitycheck.csv and sanitycheck.xml).
        """)

    parser.add_argument(
        "--report-name",
        help="""Create a report with a custom name.
        """)

    parser.add_argument("--report-excluded",
                        action="store_true",
                        help="""List all tests that are never run based on current scope and
                        coverage. If you are looking for accurate results, run this with
                        --all, but this will take a while...""")

    parser.add_argument("--compare-report",
                        help="Use this report file for size comparison")

    parser.add_argument(
        "-B", "--subset",
        help="Only run a subset of the tests, 1/4 for running the first 25%%, "
             "3/5 means run the 3rd fifth of the total. "
             "This option is useful when running a large number of tests on "
             "different hosts to speed up execution time.")

    parser.add_argument(
        "-N", "--ninja", action="store_true",
        help="Use the Ninja generator with CMake")

    parser.add_argument(
        "-y", "--dry-run", action="store_true",
        help="""Create the filtered list of test cases, but don't actually
        run them. Useful if you're just interested in the discard report
        generated for every run and saved in the specified output
        directory (sanitycheck_discard.csv).
        """)

    parser.add_argument("--list-tags", action="store_true",
                        help="List all tags in selected tests")

    case_select.add_argument("--list-tests", action="store_true",
                             help="""List of all sub-test functions recursively found in
                             all --testcase-root arguments. Note different sub-tests can share
                             the same section name and come from different directories.
                             The output is flattened and reports --sub-test names only,
                             not their directories. For instance net.socket.getaddrinfo_ok
                             and net.socket.fd_set belong to different directories.
                             """)

    case_select.add_argument("--test-tree", action="store_true",
                             help="""Output the testsuite in a tree form""")

    case_select.add_argument("--list-test-duplicates", action="store_true",
                             help="""List tests with duplicate identifiers.
                             """)

    parser.add_argument("--export-tests", action="store",
                        metavar="FILENAME",
                        help="Export test case meta-data to a file in CSV format.")

    parser.add_argument("--timestamps",
                        action="store_true",
                        help="Print all messages with time stamps")

    parser.add_argument(
        "-r", "--release", action="store_true",
        help="Update the benchmark database with the results of this test "
             "run. Intended to be run by CI when tagging an official "
             "release. This database is used as a basis for comparison "
             "when looking for deltas in metrics such as footprint")
    parser.add_argument("-w", "--warnings-as-errors", action="store_true",
                        help="Treat warning conditions as errors")
    parser.add_argument(
        "-v", "--verbose", action="count", default=0,
        help="Emit debugging information, call multiple times to increase "
             "verbosity")
    parser.add_argument(
        "-i", "--inline-logs", action="store_true",
        help="Upon test failure, print relevant log data to stdout "
             "instead of just a path to it")
    parser.add_argument("--log-file", metavar="FILENAME", action="store",
                        help="Log also to a file")
    parser.add_argument(
        "-m", "--last-metrics", action="store_true",
        help="Instead of comparing metrics from the last --release, "
             "compare with the results of the previous sanity check "
             "invocation")
    parser.add_argument(
        "-u", "--no-update", action="store_true",
        help="Do not update the results of the last run of the sanity "
             "checks")
    case_select.add_argument(
        "-F", "--load-tests", metavar="FILENAME", action="store",
        help="Load list of tests and platforms to be run from file.")

    case_select.add_argument(
        "-E", "--save-tests", metavar="FILENAME", action="store",
        help="Append list of tests and platforms to be run to file.")

    test_or_build = parser.add_mutually_exclusive_group()
    test_or_build.add_argument(
        "-b", "--build-only", action="store_true",
        help="Only build the code, do not execute any of it in QEMU")

    test_or_build.add_argument(
        "--test-only", action="store_true",
        help="""Only run device tests with current artifacts, do not build
        the code""")
    parser.add_argument(
        "--cmake-only", action="store_true",
        help="Only run cmake, do not build or run.")

    parser.add_argument(
        "-j", "--jobs", type=int,
        help="Number of jobs for building, defaults to number of CPU threads, "
             "overcommitted by factor 2 when --build-only")

    parser.add_argument(
        "--show-footprint", action="store_true",
        help="Show footprint statistics and deltas since last release."
    )
    parser.add_argument(
        "-H", "--footprint-threshold", type=float, default=5,
        help="When checking test case footprint sizes, warn the user if "
             "the new app size is greater than the specified percentage "
             "from the last release. Default is 5. 0 to warn on any "
             "increase in app size")
    parser.add_argument(
        "-D", "--all-deltas", action="store_true",
        help="Show all footprint deltas, positive or negative. Implies "
             "--footprint-threshold=0")
    parser.add_argument(
        "-O", "--outdir",
        default=os.path.join(os.getcwd(), "sanity-out"),
        help="Output directory for logs and binaries. "
             "Default is 'sanity-out' in the current directory. "
             "This directory will be deleted unless '--no-clean' is set.")
    parser.add_argument(
        "-n", "--no-clean", action="store_true",
        help="Do not delete the outdir before building. Will result in "
             "faster compilation since builds will be incremental")
    case_select.add_argument(
        "-T", "--testcase-root", action="append", default=[],
        help="Base directory to recursively search for test cases. All "
             "testcase.yaml files under here will be processed. May be "
             "called multiple times. Defaults to the 'samples/' and "
             "'tests/' directories at the base of the Zephyr tree.")

    board_root_list = ["%s/boards" % ZEPHYR_BASE,
                       "%s/scripts/sanity_chk/boards" % ZEPHYR_BASE]

    parser.add_argument(
        "-A", "--board-root", action="append", default=board_root_list,
        help="""Directory to search for board configuration files. All .yaml
        files in the directory will be processed. The directory should have the same
        structure as in the main Zephyr tree: boards/<arch>/<board_name>/""")

    parser.add_argument(
        "-z", "--size", action="append",
        help="Don't run sanity checks. Instead, produce a report to "
             "stdout detailing RAM/ROM sizes on the specified filenames. "
             "All other command line arguments ignored.")
    parser.add_argument(
        "-S", "--enable-slow", action="store_true",
        help="Execute time-consuming test cases that have been marked "
             "as 'slow' in testcase.yaml. Normally these are only built.")
    parser.add_argument(
        "--disable-unrecognized-section-test", action="store_true",
        default=False,
        help="Skip the 'unrecognized section' test.")
    parser.add_argument("-R", "--enable-asserts", action="store_true",
                        default=True,
                        help="deprecated, left for compatibility")
    parser.add_argument("--disable-asserts", action="store_false",
                        dest="enable_asserts",
                        help="deprecated, left for compatibility")
    parser.add_argument("-Q", "--error-on-deprecations", action="store_false",
                        help="Error on deprecation warnings.")
    parser.add_argument("--enable-size-report", action="store_true",
                        help="Enable expensive computation of RAM/ROM segment sizes.")

    parser.add_argument(
        "-x", "--extra-args", action="append", default=[],
        help="""Extra CMake cache entries to define when building test cases.
        May be called multiple times. The key-value entries will be
        prefixed with -D before being passed to CMake.

        E.g.
        "sanitycheck -x=USE_CCACHE=0"
        will translate to
        "cmake -DUSE_CCACHE=0"

        which will ultimately disable ccache.
        """
    )

    parser.add_argument(
        "--device-testing", action="store_true",
        help="Test on device directly. Specify the serial device to "
             "use with the --device-serial option.")

    parser.add_argument(
        "-X", "--fixture", action="append", default=[],
        help="Specify a fixture that a board might support")
    parser.add_argument(
        "--device-serial",
        help="Serial device for accessing the board (e.g., /dev/ttyACM0)")

    parser.add_argument("--generate-hardware-map",
                        help="""Probe serial devices connected to this platform
                        and create a hardware map file to be used with
                        --device-testing
                        """)

    parser.add_argument("--hardware-map",
                        help="""Load hardware map from a file. This will be used
                        for testing on hardware that is listed in the file.
                        """)

    parser.add_argument(
        "--west-flash", nargs='?', const=[],
        help="""Uses west instead of ninja or make to flash when running with
        --device-testing. Supports comma-separated argument list.

        E.g. "sanitycheck --device-testing --device-serial /dev/ttyACM0
        --west-flash="--board-id=foobar,--erase"
        will translate to "west flash -- --board-id=foobar --erase"

        NOTE: device-testing must be enabled to use this option.
        """
    )
    parser.add_argument(
        "--west-runner",
        help="""Uses the specified west runner instead of default when running
        with --west-flash.

        E.g. "sanitycheck --device-testing --device-serial /dev/ttyACM0
        --west-flash --west-runner=pyocd"
        will translate to "west flash --runner pyocd"

        NOTE: west-flash must be enabled to use this option.
        """
    )

    valgrind_asan_group = parser.add_mutually_exclusive_group()

    valgrind_asan_group.add_argument(
        "--enable-valgrind", action="store_true",
        help="""Run binary through valgrind and check for several memory access
        errors. Valgrind needs to be installed on the host. This option only
        works with host binaries such as those generated for the native_posix
        configuration and is mutually exclusive with --enable-asan.
        """)

    valgrind_asan_group.add_argument(
        "--enable-asan", action="store_true",
        help="""Enable address sanitizer to check for several memory access
        errors. Libasan needs to be installed on the host. This option only
        works with host binaries such as those generated for the native_posix
        configuration and is mutually exclusive with --enable-valgrind.
        """)

    parser.add_argument(
        "--enable-lsan", action="store_true",
        help="""Enable leak sanitizer to check for heap memory leaks.
        Libasan needs to be installed on the host. This option only
        works with host binaries such as those generated for the native_posix
        configuration and when --enable-asan is given.
        """)

    parser.add_argument("--enable-coverage", action="store_true",
                        help="Enable code coverage using gcov.")

    parser.add_argument("-C", "--coverage", action="store_true",
                        help="Generate coverage reports. Implies "
                             "--enable-coverage.")

    parser.add_argument("--coverage-platform", action="append", default=[],
                        help="Platforms to run coverage reports on. "
                             "This option may be used multiple times. "
                             "Defaults to what was selected with --platform.")

    parser.add_argument("--gcov-tool", default=None,
                        help="Path to the gcov tool to use for code coverage "
                             "reports")

    parser.add_argument("--coverage-tool", choices=['lcov', 'gcovr'], default='lcov',
                        help="Tool to use to generate coverage report.")

    return parser.parse_args()
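
# A hedged example of a typical invocation combining several of the options
# defined above (board name and paths are illustrative, not a recommendation):
#
#     ./scripts/sanitycheck -p frdm_k64f -T tests/kernel/fifo \
#         --enable-size-report -o reports/
#
# Since fromfile_prefix_chars is set to "+", the same arguments can also be
# collected in a file and passed as '+argsfile' (standard argparse behavior).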


def log_info(filename):
    filename = os.path.relpath(os.path.realpath(filename))
    if options.inline_logs:
        info("{:-^100}".format(filename))

        try:
            with open(filename) as fp:
                data = fp.read()
        except Exception as e:
            data = "Unable to read log data (%s)\n" % (str(e))

        sys.stdout.write(data)
        if log_file:
            log_file.write(data)
        info("{:-^100}".format(filename))
    else:
        info("\n\tsee: " + COLOR_YELLOW + filename + COLOR_NORMAL)


def log_info_file(instance):
    build_dir = instance.build_dir
    h_log = "{}/handler.log".format(build_dir)
    b_log = "{}/build.log".format(build_dir)
    v_log = "{}/valgrind.log".format(build_dir)

    if os.path.exists(v_log) and "Valgrind" in instance.reason:
        log_info("{}".format(v_log))
    elif os.path.exists(h_log):
        log_info("{}".format(h_log))
    else:
        log_info("{}".format(b_log))


def size_report(sc):
    info(sc.filename)
    info("SECTION NAME             VMA        LMA     SIZE  HEX SZ TYPE")
    for i in range(len(sc.sections)):
        v = sc.sections[i]

        info("%-17s 0x%08x 0x%08x %8d 0x%05x %-7s" %
             (v["name"], v["virt_addr"], v["load_addr"], v["size"], v["size"],
              v["type"]))

    info("Totals: %d bytes (ROM), %d bytes (RAM)" %
         (sc.rom_size, sc.ram_size))
    info("")


class CoverageTool:
    """ Base class for every supported coverage tool
    """

    def __init__(self):
        self.gcov_tool = options.gcov_tool

    @staticmethod
    def factory(tool):
        if tool == 'lcov':
            return Lcov()
        if tool == 'gcovr':
            return Gcovr()
        error("Unsupported coverage tool specified: {}".format(tool))

    @staticmethod
    def retrieve_gcov_data(input_file):
        if VERBOSE:
            print("Working on %s" % input_file)
        extracted_coverage_info = {}
        capture_data = False
        capture_complete = False
        with open(input_file, 'r') as fp:
            for line in fp.readlines():
                if re.search("GCOV_COVERAGE_DUMP_START", line):
                    capture_data = True
                    continue
                if re.search("GCOV_COVERAGE_DUMP_END", line):
                    capture_complete = True
                    break
                # Loop until the coverage data is found.
                if not capture_data:
                    continue
                if line.startswith("*"):
                    sp = line.split("<")
                    if len(sp) > 1:
                        # Remove the leading delimiter "*"
                        file_name = sp[0][1:]
                        # Remove the trailing new line char
                        hex_dump = sp[1][:-1]
                    else:
                        continue
                else:
                    continue
                extracted_coverage_info.update({file_name: hex_dump})
        if not capture_data:
            capture_complete = True
        return {'complete': capture_complete, 'data': extracted_coverage_info}
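
    # retrieve_gcov_data() expects the coverage dump that the target prints on
    # its console between the two markers matched above. An illustrative
    # (abridged, made-up) handler.log excerpt would be:
    #
    #     GCOV_COVERAGE_DUMP_START
    #     *path/to/obj/file.gcda<adcc0100...hex bytes...>
    #     GCOV_COVERAGE_DUMP_END
    #
    # i.e. one '*'-prefixed line per .gcda file, with the file name and its hex
    # dump separated by '<', which is exactly what the split("<") above parses.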

    @staticmethod
    def create_gcda_files(extracted_coverage_info):
        if VERBOSE:
            print("Generating gcda files")
        for filename, hexdump_val in extracted_coverage_info.items():
            # if kobject_hash is given for coverage, gcovr fails, so skip it
            # (this is a problem only in gcovr v4.1)
            if "kobject_hash" in filename:
                filename = filename[:-4] + "gcno"
                try:
                    os.remove(filename)
                except Exception:
                    pass
                continue

            with open(filename, 'wb') as fp:
                fp.write(bytes.fromhex(hexdump_val))

    def generate(self, outdir):
        for filename in glob.glob("%s/**/handler.log" % outdir, recursive=True):
            gcov_data = self.__class__.retrieve_gcov_data(filename)
            capture_complete = gcov_data['complete']
            extracted_coverage_info = gcov_data['data']
            if capture_complete:
                self.__class__.create_gcda_files(extracted_coverage_info)
                verbose("Gcov data captured: {}".format(filename))
            else:
                error("Gcov data capture incomplete: {}".format(filename))

        with open(os.path.join(outdir, "coverage.log"), "a") as coveragelog:
            ret = self._generate(outdir, coveragelog)
            if ret == 0:
                info("HTML report generated: {}".format(
                    os.path.join(outdir, "coverage", "index.html")))


class Lcov(CoverageTool):

    def __init__(self):
        super().__init__()
        self.ignores = []

    def add_ignore_file(self, pattern):
        self.ignores.append('*' + pattern + '*')

    def add_ignore_directory(self, pattern):
        self.ignores.append(pattern + '/*')

    def _generate(self, outdir, coveragelog):
        coveragefile = os.path.join(outdir, "coverage.info")
        ztestfile = os.path.join(outdir, "ztest.info")
        subprocess.call(["lcov", "--gcov-tool", self.gcov_tool,
                         "--capture", "--directory", outdir,
                         "--rc", "lcov_branch_coverage=1",
                         "--output-file", coveragefile], stdout=coveragelog)
        # We want to remove tests/* and tests/ztest/test/* but save tests/ztest
        subprocess.call(["lcov", "--gcov-tool", self.gcov_tool, "--extract",
                         coveragefile,
                         os.path.join(ZEPHYR_BASE, "tests", "ztest", "*"),
                         "--output-file", ztestfile,
                         "--rc", "lcov_branch_coverage=1"], stdout=coveragelog)

        if os.path.exists(ztestfile) and os.path.getsize(ztestfile) > 0:
            subprocess.call(["lcov", "--gcov-tool", self.gcov_tool, "--remove",
                             ztestfile,
                             os.path.join(ZEPHYR_BASE, "tests/ztest/test/*"),
                             "--output-file", ztestfile,
                             "--rc", "lcov_branch_coverage=1"],
                            stdout=coveragelog)
            files = [coveragefile, ztestfile]
        else:
            files = [coveragefile]

        for i in self.ignores:
            subprocess.call(
                ["lcov", "--gcov-tool", self.gcov_tool, "--remove",
                 coveragefile, i, "--output-file",
                 coveragefile, "--rc", "lcov_branch_coverage=1"],
                stdout=coveragelog)

        # The --ignore-errors source option is added to avoid it exiting due to
        # samples/application_development/external_lib/
        return subprocess.call(["genhtml", "--legend", "--branch-coverage",
                                "--ignore-errors", "source",
                                "-output-directory",
                                os.path.join(outdir, "coverage")] + files,
                               stdout=coveragelog)


class Gcovr(CoverageTool):

    def __init__(self):
        super().__init__()
        self.ignores = []

    def add_ignore_file(self, pattern):
        self.ignores.append('.*' + pattern + '.*')

    def add_ignore_directory(self, pattern):
        self.ignores.append(pattern + '/.*')

    @staticmethod
    def _interleave_list(prefix, values):
        tuple_list = [(prefix, item) for item in values]
        return [item for sublist in tuple_list for item in sublist]

    def _generate(self, outdir, coveragelog):
        coveragefile = os.path.join(outdir, "coverage.json")
        ztestfile = os.path.join(outdir, "ztest.json")

        excludes = Gcovr._interleave_list("-e", self.ignores)

        # We want to remove tests/* and tests/ztest/test/* but save tests/ztest
        subprocess.call(["gcovr", "-r", ZEPHYR_BASE, "--gcov-executable",
                         self.gcov_tool, "-e", "tests/*"] + excludes +
                        ["--json", "-o", coveragefile, outdir],
                        stdout=coveragelog)

        subprocess.call(["gcovr", "-r", ZEPHYR_BASE, "--gcov-executable",
                         self.gcov_tool, "-f", "tests/ztest", "-e",
                         "tests/ztest/test/*", "--json", "-o", ztestfile,
                         outdir], stdout=coveragelog)

        if os.path.exists(ztestfile) and os.path.getsize(ztestfile) > 0:
            files = [coveragefile, ztestfile]
        else:
            files = [coveragefile]

        subdir = os.path.join(outdir, "coverage")
        os.makedirs(subdir, exist_ok=True)

        tracefiles = self._interleave_list("--add-tracefile", files)

        return subprocess.call(["gcovr", "-r", ZEPHYR_BASE, "--html",
                                "--html-details"] + tracefiles +
                               ["-o", os.path.join(subdir, "index.html")],
                               stdout=coveragelog)


def get_generator():
    if options.ninja:
        generator_cmd = "ninja"
        generator = "Ninja"
    else:
        generator_cmd = "make"
        generator = "Unix Makefiles"
    return generator_cmd, generator


def export_tests(filename, tests):
    with open(filename, "wt") as csvfile:
        fieldnames = ['section', 'subsection', 'title', 'reference']
        cw = csv.DictWriter(csvfile, fieldnames, lineterminator=os.linesep)
        for test in tests:
            data = test.split(".")
            if len(data) > 1:
                subsec = " ".join(data[1].split("_")).title()
                rowdict = {
                    "section": data[0].capitalize(),
                    "subsection": subsec,
                    "title": test,
                    "reference": test
                }
                cw.writerow(rowdict)
            else:
                info("{} can't be exported".format(test))
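
# Illustrative row produced by export_tests() for a made-up test id such as
# "kernel.common.timer": section "Kernel", subsection "Common", with the full
# id repeated in the title and reference columns (no header row is written):
#
#     Kernel,Common,kernel.common.timer,kernel.common.timer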


def native_and_unit_first(a, b):
    if a[0].startswith('unit_testing'):
        return -1
    if b[0].startswith('unit_testing'):
        return 1
    if a[0].startswith('native_posix'):
        return -1
    if b[0].startswith('native_posix'):
        return 1
    if a[0].split("/", 1)[0].endswith("_bsim"):
        return -1
    if b[0].split("/", 1)[0].endswith("_bsim"):
        return 1

    return (a > b) - (a < b)


run_individual_tests = None
options = None


def main():
    start_time = time.time()
    global VERBOSE, log_file
    global options
    global run_individual_tests

    options = parse_arguments()

    if options.generate_hardware_map:
        from serial.tools import list_ports
        serial_devices = list_ports.comports()
        filtered = []
        for d in serial_devices:
            if d.manufacturer in ['ARM', 'SEGGER', 'MBED', 'STMicroelectronics',
                                  'Atmel Corp.', 'Texas Instruments',
                                  'Silicon Labs', 'NXP Semiconductors']:
                # TI XDS110 can have multiple serial devices for a single board
                # assume endpoint 0 is the serial, skip all others
                if d.manufacturer == 'Texas Instruments' and not d.location.endswith('0'):
                    continue
                s_dev = {}
                s_dev['platform'] = "unknown"
                s_dev['id'] = d.serial_number
                s_dev['serial'] = d.device
                s_dev['product'] = d.product
                if s_dev['product'] in ['DAPLink CMSIS-DAP', 'MBED CMSIS-DAP']:
                    s_dev['runner'] = "pyocd"
                elif s_dev['product'] in ['J-Link', 'J-Link OB']:
                    s_dev['runner'] = "jlink"
                elif s_dev['product'] in ['STM32 STLink']:
                    s_dev['runner'] = "openocd"
                elif s_dev['product'].startswith('XDS110'):
                    s_dev['runner'] = "openocd"
                else:
                    s_dev['runner'] = "unknown"
                s_dev['available'] = True
                s_dev['connected'] = True
                filtered.append(s_dev)
            else:
                print("Unsupported device (%s): %s" % (d.manufacturer, d))

        if os.path.exists(options.generate_hardware_map):
            # use existing map

            with open(options.generate_hardware_map, 'r') as yaml_file:
                hwm = yaml.load(yaml_file, Loader=yaml.FullLoader)
                # disconnect everything
                for h in hwm:
                    h['connected'] = False
                    h['serial'] = None

                for d in filtered:
                    for h in hwm:
                        if d['id'] == h['id'] and d['product'] == h['product']:
                            print("Already in map: %s (%s)" % (d['product'], d['id']))
                            h['connected'] = True
                            h['serial'] = d['serial']
                            d['match'] = True

                new = list(filter(lambda n: not n.get('match', False), filtered))
                hwm = hwm + new

                #import pprint
                #pprint.pprint(hwm)
                with open(options.generate_hardware_map, 'w') as yaml_file:
                    yaml.dump(hwm, yaml_file, default_flow_style=False)

        else:
            # create new file
            with open(options.generate_hardware_map, 'w') as yaml_file:
                yaml.dump(filtered, yaml_file, default_flow_style=False)

        return
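
    # The generated hardware map is a YAML list with one entry per detected
    # board, using the keys filled in above. An illustrative entry (all values
    # made up) might look like:
    #
    #   - available: true
    #     connected: true
    #     id: "000683123456"
    #     platform: unknown
    #     product: J-Link
    #     runner: jlink
    #     serial: /dev/ttyACM0
    #
    # The 'platform' field is written as "unknown" here; presumably it is
    # edited by hand before the file is passed back in with --hardware-map.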

    if options.west_runner and not options.west_flash:
        error("west-runner requires west-flash to be enabled")
        sys.exit(1)

    if options.west_flash and not options.device_testing:
        error("west-flash requires device-testing to be enabled")
        sys.exit(1)

    if options.coverage:
        options.enable_coverage = True

    if not options.coverage_platform:
        options.coverage_platform = options.platform

    if options.size:
        for fn in options.size:
            size_report(SizeCalculator(fn, []))
        sys.exit(0)

    VERBOSE += options.verbose

    if options.log_file:
        log_file = open(options.log_file, "w")

    if options.subset:
        subset, sets = options.subset.split("/")
        if int(subset) > 0 and int(sets) >= int(subset):
            info("Running only a subset: %s/%s" % (subset, sets))
        else:
            error("You have provided a wrong subset value: %s." % options.subset)
            return

    # Cleanup

    if options.no_clean or options.only_failed or options.test_only:
        if os.path.exists(options.outdir):
            info("Keeping artifacts untouched")
    elif os.path.exists(options.outdir):
        for i in range(1, 100):
            new_out = options.outdir + ".{}".format(i)
            if not os.path.exists(new_out):
                info("Renaming output directory to {}".format(new_out))
                shutil.move(options.outdir, new_out)
                break
        #shutil.rmtree("%s.old" % options.outdir)

    if not options.testcase_root:
        options.testcase_root = [os.path.join(ZEPHYR_BASE, "tests"),
                                 os.path.join(ZEPHYR_BASE, "samples")]

    if options.show_footprint or options.compare_report or options.release:
        options.enable_size_report = True

    suite = TestSuite(options.board_root, options.testcase_root, options.outdir)
    suite.add_testcases()
    suite.add_configurations()

    if options.device_testing:
        if options.hardware_map:
            suite.load_hardware_map(options.hardware_map)
            if not options.platform:
                options.platform = []
                for platform in suite.connected_hardware:
                    if platform['connected']:
                        options.platform.append(platform['platform'])

        elif options.device_serial:  # backward compatibility
            if options.platform and len(options.platform) == 1:
                suite.load_hardware_map_from_cmdline(options.device_serial,
                                                     options.platform[0])
            else:
                error("""When --device-testing is used with --device-serial, only one
                platform is allowed""")

    if suite.load_errors:
        sys.exit(1)

    if options.list_tags:
        tags = set()
        for _, tc in suite.testcases.items():
            tags = tags.union(tc.tags)

        for t in tags:
            print("- {}".format(t))

        return

    if options.export_tests:
        cnt = 0
        tests = suite.get_all_tests()
        export_tests(options.export_tests, tests)
        return

    run_individual_tests = []

    if options.test:
        run_individual_tests = options.test

    if options.list_tests or options.test_tree or options.list_test_duplicates or options.sub_test:
        cnt = 0
        all_tests = suite.get_all_tests()

        if options.list_test_duplicates:
            import collections
            dupes = [item for item, count in collections.Counter(all_tests).items() if count > 1]
            if dupes:
                print("Tests with duplicate identifiers:")
                for dupe in dupes:
                    print("- {}".format(dupe))
                    for dc in suite.get_testcase(dupe):
                        print("  - {}".format(dc))
            else:
                print("No duplicates found.")
            return

        if options.sub_test:
            for st in options.sub_test:
                subtests = suite.get_testcase(st)
                for sti in subtests:
                    run_individual_tests.append(sti.name)

            if run_individual_tests:
                info("Running the following tests:")
                for test in run_individual_tests:
                    print(" - {}".format(test))
            else:
                info("Tests not found")
                return

        elif options.list_tests or options.test_tree:
            if options.test_tree:
                testsuite = Node("Testsuite")
                samples = Node("Samples", parent=testsuite)
                tests = Node("Tests", parent=testsuite)

            for test in sorted(all_tests):
                cnt = cnt + 1
                if options.list_tests:
                    print(" - {}".format(test))

                if options.test_tree:
                    if test.startswith("sample."):
                        sec = test.split(".")
                        area = find(samples, lambda node: node.name == sec[1] and node.parent == samples)
                        if not area:
                            area = Node(sec[1], parent=samples)

                        t = Node(test, parent=area)
                    else:
                        sec = test.split(".")
                        area = find(tests, lambda node: node.name == sec[0] and node.parent == tests)
                        if not area:
                            area = Node(sec[0], parent=tests)

                        if area and len(sec) > 2:
                            subarea = find(area, lambda node: node.name == sec[1] and node.parent == area)
                            if not subarea:
                                subarea = Node(sec[1], parent=area)

                            t = Node(test, parent=subarea)

            if options.list_tests:
                print("{} total.".format(cnt))

            if options.test_tree:
                for pre, _, node in RenderTree(testsuite):
                    print("%s%s" % (pre, node.name))

            return
2017-09-02 18:32:08 +02:00
|
|
|
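    # Decide which instances to execute: rerun only the last failures, load a
    # previously saved test plan, reuse the existing sanitycheck.csv when only
    # executing, or apply the normal filters (which also produces the list of
    # discarded configurations reported below).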
    discards = []

    if options.only_failed:
        suite.get_last_failed()
    elif options.load_tests:
        suite.load_from_file(options.load_tests)
    elif options.test_only:
        last_run = os.path.join(options.outdir, "sanitycheck.csv")
        suite.load_from_file(last_run)
    else:
        discards = suite.apply_filters()

    if VERBOSE > 1 and discards:
        # If a platform filter was given on the command line, there is no need
        # to list every other platform as excluded; show only the discards
        # that apply to the selected platforms.
        for i, reason in discards.items():
            if options.platform and i.platform.name not in options.platform:
                continue
            debug(
                "{:<25} {:<50} {}SKIPPED{}: {}".format(
                    i.platform.name,
                    i.testcase.name,
                    COLOR_YELLOW,
                    COLOR_NORMAL,
                    reason))

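    # report_excluded compares everything that was discovered against what is
    # actually scheduled to build or run and lists the cases that are never
    # exercised on any selected platform.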
    if options.report_excluded:
        all_tests = suite.get_all_tests()
        to_be_run = set()
        for i, p in suite.instances.items():
            to_be_run.update(p.testcase.cases)

        if all_tests - to_be_run:
            print("Tests that never build or run:")
            for not_run in all_tests - to_be_run:
                print("- {}".format(not_run))

        return

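    # The subset option has the form "<index>/<number of sets>": the selected
    # instances are cut into consecutive slices of round(total / sets) and
    # only the requested slice is kept; the last slice absorbs any rounding
    # remainder. Worked example based on the arithmetic below: 10 instances
    # split as 3/3 gives per_set = 3, start = 6, end = 10, so the final slice
    # runs 4 instances.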
    if options.subset:
        # suite.instances = OrderedDict(sorted(suite.instances.items(),
        #                     key=cmp_to_key(native_and_unit_first)))
        subset, sets = options.subset.split("/")
        total = len(suite.instances)
        per_set = round(total / int(sets))
        start = (int(subset) - 1) * per_set
        if subset == sets:
            end = total
        else:
            end = start + per_set

        sliced_instances = islice(suite.instances.items(), start, end)
        suite.instances = OrderedDict(sliced_instances)

    if options.save_tests:
        suite.csv_report(options.save_tests)
        return

info("%d test configurations selected, %d configurations discarded due to filters." %
|
|
|
|
(len(suite.instances), len(discards)))
|
2015-07-17 21:03:52 +02:00
|
|
|
|
2019-11-21 19:00:18 +01:00
|
|
|
    if options.device_testing:
        print("\nDevice testing on:")
        for p in suite.connected_hardware:
            if p['connected']:
                print("%s (%s) on %s" % (p['platform'], p.get('id', None), p['serial']))

    if options.dry_run:
        duration = time.time() - start_time
        info("Completed in %d seconds" % duration)
        return

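    # Main execution loop: run the whole selection once, then retry only the
    # failures up to --retry-failed additional times, sleeping 60 seconds
    # between iterations so devices and emulators can settle.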
    retries = options.retry_failed + 1
    completed = 0

    suite.update()
    suite.start_time = start_time

    while True:
        completed += 1

        if completed > 1:
            info("%d Iteration:" % completed)
            time.sleep(60)  # waiting for the system to settle down
            suite.total_done = suite.total_tests - suite.total_failed
            suite.total_failed = 0

        suite.execute()
        info("", False)

        retries = retries - 1
        if retries == 0 or suite.total_failed == 0:
            break

    suite.misc_reports(options.compare_report, options.show_footprint,
                       options.all_deltas, options.footprint_threshold, options.last_metrics)

    suite.duration = time.time() - start_time
    suite.summary(options.disable_unrecognized_section_test)

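    # Coverage post-processing starts by picking a gcov binary: builds for
    # "native" and "unit" type platforms use the host toolchain, so the host
    # gcov matches; otherwise fall back to the Zephyr SDK's
    # i586-zephyr-elf-gcov when an SDK install is available.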
    if options.coverage:
        if options.gcov_tool is None:
            use_system_gcov = False

            for plat in options.coverage_platform:
                ts_plat = suite.get_platform(plat)
                if ts_plat and (ts_plat.type in {"native", "unit"}):
                    use_system_gcov = True

            if use_system_gcov or "ZEPHYR_SDK_INSTALL_DIR" not in os.environ:
                options.gcov_tool = "gcov"
            else:
                options.gcov_tool = os.path.join(os.environ["ZEPHYR_SDK_INSTALL_DIR"],
                                                 "i586-zephyr-elf/bin/i586-zephyr-elf-gcov")

        info("Generating coverage files...")
        coverage_tool = CoverageTool.factory(options.coverage_tool)
        coverage_tool.add_ignore_file('generated')
        coverage_tool.add_ignore_directory('tests')
        coverage_tool.add_ignore_directory('samples')
        coverage_tool.generate(options.outdir)

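    # With device testing enabled, print how many test instances each
    # connected board ended up running (the per-board 'counter').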
    if options.device_testing:
        print("\nHardware distribution summary:\n")
        for p in suite.connected_hardware:
            if p['connected']:
                print("%s (%s): %d" % (p['platform'], p.get('id', None), p['counter']))

    # save_reports() must be one of the last steps: it closes the log file,
    # and any debug/info/error call made after that point would fail with
    # "ValueError: I/O operation on closed file".
    suite.save_reports()

    if suite.total_failed or (suite.warnings and options.warnings_as_errors):
        sys.exit(1)

if __name__ == "__main__":
    main()