Add Object Documentation

Adds the basic kernel objects' documentation describing the function of
tasks, fibers, and interrupt service routines.
Adds the nanokernel objects' documentation describing the function of
the most important nanokernel objects.
Adds the microkernel objects' documentation describing the function of
the most important microkernel objects.

Changes the index.rst file to include the Object Documentation.

Change-Id: Ib35d973cc3575a7ecc32c4ab175e05cb298e3306
Signed-off-by: Rodrigo Caballero <rodrigo.caballero.abraham@intel.com>
Signed-off-by: Anas Nashif <anas.nashif@intel.com>
Rodrigo Caballero 2015-05-20 10:51:50 -05:00 committed by Anas Nashif
parent d6ebd1839d
commit 67405bb918
9 changed files with 2334 additions and 1 deletions


@ -9,6 +9,7 @@
installation/installation.rst
collaboration/collaboration.rst
object/object.rst
doxygen/doxygen.rst


@ -0,0 +1,681 @@
[figures/task_states.svg: a Visio-exported SVG diagram of the task state machine, showing the states Runnable, Running, Waiting, Suspended, and Suspended & Waiting, and the transitions between them.]

doc/object/object.rst Normal file

@ -0,0 +1,37 @@
Object Documentation
####################
Use this information to understand how the different kernel objects of
Tiny Mountain function. The purpose of this section is to help you
understand the most important objects of the operating system. To help
you navigate through the content, we have divided the objects into
:ref:`basicObjects`, :ref:`nanokernelObjects`, and
:ref:`microkernelObjects`.

We strongly recommend that you start with the :ref:`basicObjects` before
moving on to the :ref:`nanokernelObjects` or the
:ref:`microkernelObjects`. Additionally, we have included some
:ref:`driverExamples` for better comprehension of the objects' function.
.. rubric:: Abbreviations
+---------------+-------------------------------------------------------------------+
| Abbreviations | Definition |
+===============+===================================================================+
| API | Application Program Interface: typically a defined set |
| | of routines and protocols for building software input and output |
| | mechanisms. |
+---------------+-------------------------------------------------------------------+
| ISR | Interrupt Service Routine |
+---------------+-------------------------------------------------------------------+
| IDT | Interrupt Descriptor Table |
+---------------+-------------------------------------------------------------------+
| XIP | eXecute In Place |
+---------------+-------------------------------------------------------------------+
.. toctree:: Table of Contents
:maxdepth: 2
object_basic.rst
object_microkernel.rst
object_nanokernel.rst


@ -0,0 +1,26 @@
.. _basicObjects:
Execution Contexts
##################
Tasks, fibers and interrupt service routines serve as the basis of the
operating system functionality. The purpose of this section is to
describe how these execution contexts operate, their behavior, and their
implementation. Using this information you should be able to understand
what each context is capable of, how it operates and where its limits
are.
This section does not replace the Application Program Interface
documentation but rather complements it. The examples should provide
you with enough insight to understand the functionality but are not
meant to replace the detailed in-code documentation.
.. toctree:: Table of Contents
:maxdepth: 2
object_basic_fibers.rst
object_basic_interrupts.rst
object_basic_tasks.rst


@ -0,0 +1,140 @@
Fibers
######
A Tiny Mountain fiber is an execution thread and a lightweight
alternative to a task. It can use nanokernel objects but not
microkernel objects. A runnable fiber will preempt the execution of any
task but it will not preempt the execution of another fiber.
Defining Fibers
***************
A fiber is defined within the application as a routine that takes two
32-bit values as arguments and returns void, for example:
.. code-block:: c
void fiber ( int arg1, int arg2 );
.. note::
A pointer can be passed to a fiber as one of the parameters but it
must be cast to a 32-bit integer.
Unlike a microkernel task, a fiber cannot be defined within the project
file.
Fibers can be written in assembly. How to code a fiber in assembly is
beyond the scope of this document.
Starting a Fiber
****************
A nanokernel fiber must be explicitly started by calling
:c:func:`fiber_fiber_start()` or :c:func:`task_fiber_start()` to create
and start a fiber. The function :c:func:`fiber_fiber_start()` creates
and starts a fiber from another fiber, while
:c:func:`task_fiber_start()` does so from a task. Both APIs use the
parameters *parameter1* and *parameter2* as *arg1* and *arg2* given to
the fiber. The full documentation on these APIs can be found in the
:ref:`code`.
When :c:func:`task_fiber_start()` is called from a task, the new fiber
will be immediately ready to run. The background task immediately stops
execution, yielding to the new fiber until the fiber calls a blocking
service that de-schedules it. If the fiber performs a return from the
routine in which it started, the fiber is terminated, and its stack can
then be reused or de-allocated.
Fiber Stack Definition
**********************
The fiber stack is used for local variables and for calling functions or
subroutines. Additionally, the first locations on the stack are used by
the kernel for the context control structure. Allocate or declare the
fiber stack prior to calling :c:func:`fiber_fiber_start()`. A fiber
stack can be any sort of buffer. In this example the fiber stack is
defined as an array of 32-bit integers:
.. code-block:: c
int32_t process_stack[256];
The size of the fiber stack can be set freely. It is recommended to
start with a stack much larger than you think you need, say 1 KB for a
simple fiber, and then reduce it after testing the functionality of the
fiber to optimize memory usage. The number of local variables, and the
number of function calls with large local variables, determine the
required stack size.
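As an illustration only, the following sketch starts a fiber on the
stack declared above. It assumes that :c:func:`task_fiber_start()`
takes the stack, its size in bytes, the entry routine, the two argument
values, a priority, and an options field; the routine name
:c:func:`process_fiber()` is hypothetical, and the exact parameter list
is given in the :ref:`code`.

.. code-block:: c

   /* Hypothetical sketch: start process_fiber() on process_stack,
    * passing 0 as both arguments, at priority 7 with no options.
    */
   task_fiber_start ((char *) process_stack, sizeof (process_stack),
                     process_fiber, 0, 0, 7, 0);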
Stopping a Fiber
****************
There are no APIs to stop or suspend a fiber. Only one API can influence
the scheduling of a fiber, :c:func:`fiber_yield()`. When a fiber yields
itself, the nanokernel checks for another runnable fiber of the same or
higher priority. If a fiber of the same priority or higher is found, a
context switch occurs. If no other fibers are ready to execute, or if
all the runnable fibers have a lower priority than the currently
running fiber, the nanokernel does not perform any scheduling allowing
the running fiber to continue. A task or an ISR cannot call
:c:func:`fiber_yield()`.
If a fiber executes lengthy computations that will introduce an
unacceptable delay in the scheduling of other fibers, it should yield
periodically by placing a :c:func:`fiber_yield()` call within its
computation loop.
Scheduling Fibers
*****************
The fibers in Tiny Mountain are priority-scheduled. When several fibers
are ready to run, they run in the order of their priority. When more
than one fiber of the same priority is ready to run, they are ordered
by the time that each became runnable. Each fiber runs until it is
unscheduled by an invoked kernel service or until it terminates. Using
prioritized fibers, avoiding interrupts, and considering the interrupts'
worst-case arrival rate and cost allows Tiny Mountain to apply simple
rate-monotonic analysis techniques with the nanokernel. Using these
techniques, an application can meet its deadlines.
When an external event, handled by an ISR, marks a fiber runnable, the
scheduler inserts the fiber into the list of runnable fibers based on
its priority. The worst case delay after that point is the sum of the
maximum execution times between un-scheduling points of the earlier
runnable fibers of higher or equal priority.
The nanokernel provides three mechanisms to reduce the worst-case delay
for responding to an external event:
Moving Computation Processing to a Task
=======================================
Move the processing to a task to minimize the amount of computation that
is performed at the fiber level. This reduces the scheduling delay for
fibers because a task is preempted when an ISR makes a fiber that
handles the external event runnable.
Moving Code to Handle External Event to ISR
===========================================
Move the code to handle the external event into an ISR. The ISR is
executed immediately after the event is recognized, without waiting for
the other fibers in the queue to be unscheduled.
Adding Yielding Points to Fibers
================================
Add yielding points to fibers with :c:func:`fiber_yield()`. This service
un-schedules a fiber and places it at the end of the list of ready
fibers with that priority. It allows other fibers at the same
priority to get to the head of the queue faster. If a fiber executes
code that will take some time, periodically call
:c:func:`fiber_yield()`. Multi-threading using blocking fibers is
effective in coding hard real-time applications.
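As an example of such a yielding point, the sketch below breaks a long
computation into blocks and yields after each one; the routine
:c:func:`process_block()` and the constant NUM_BLOCKS are hypothetical
placeholders for application-specific work.

.. code-block:: c

   void compute_fiber (int arg1, int arg2)
   {
       int i;

       for (i = 0; i < NUM_BLOCKS; i++) {
           process_block (i);   /* lengthy, application-specific work */
           fiber_yield ();      /* let equal-priority fibers run */
       }
   }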


@ -0,0 +1,359 @@
Interrupt Service Routines
##########################
General Information
*******************
Interrupt Service Routines are execution threads that run in response to
a hardware or software interrupt. They will preempt the execution of
any task or fiber running at the time the interrupt occurs.
Consequently, ISRs react fastest to hardware. Routines in the
nanokernel wake up with a very low overhead.
.. warning::
ISRs prevent other parts of the system from running. Therefore,
all code in these routines should be confined to short, simple
routines.
.. todo:: Insert how an ISR can be installed both static and dynamic.
Both dynamic and static ISRs can be installed. See
`Installing a Dynamic ISR`_ and `Installing a Static ISR`_ for more
details. An ISR cannot be installed in the project file; only a task or
a driver initialization call can install one.
When an ISR wakes up a fiber, there is only one context switch directly
to the fiber. When an ISR wakes up a task, there is first a context
switch to the nanokernel, and then another context switch to the task
in the microkernel.
Interrupt Stubs
***************
Interrupt stubs are small pieces of assembler code that connect your
ISR to the Interrupt Descriptor Table (IDT). The interrupt stub informs
the kernel when an interrupt is in progress, performs interrupt
controller specific work, invokes your ISR and informs the kernel when
the interrupt processing is complete. The stub address is registered in
the Interrupt Descriptor Table. The stub references your ISR and the
stubs can either be generated dynamically or statically.
Interrupt Service Routine APIs
******************************
The table lists the ISR Application Program Interfaces. There are a
number of calls that an ISR can use to switch between different
processing levels.
.. note::
Application Program Interfaces of the ISRs are architecture-
specific because they are implemented in the interrupt controller
device driver for that processor or board. The architecture specific
implementation can be found in the corresponding documentation for
each architecture.
+-------------------------+---------------------------+
| Call | Description |
+=========================+===========================+
| :c:func:`irq_enable()` | Enables a specific IRQ. |
+-------------------------+---------------------------+
| :c:func:`irq_disable()` | Disables a specific IRQ. |
+-------------------------+---------------------------+
| :c:func:`irq_lock()` | Locks out all interrupts. |
+-------------------------+---------------------------+
| :c:func:`irq_unlock()` | Unlocks all interrupts. |
+-------------------------+---------------------------+
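For example, a short critical section can be bracketed with
:c:func:`irq_lock()` and :c:func:`irq_unlock()`. The sketch below
assumes that :c:func:`irq_lock()` returns a lock-out key that is passed
back to :c:func:`irq_unlock()`; the exact contract is defined by the
architecture-specific implementation.

.. code-block:: c

   int key;

   key = irq_lock ();    /* lock out all interrupts */
   /* ... short critical section, e.g. update a shared counter ... */
   irq_unlock (key);     /* restore the previous interrupt state */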
Installing a Dynamic ISR
************************
Use :c:func:`irq_connect()` to install and connect an ISR stub
dynamically. :c:func:`irq_connect()` is processor-specific. There is no
API method to uninstall a dynamic ISR.
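A minimal sketch of a dynamic installation follows. The argument order
(IRQ line, priority, handler, handler argument), the returned vector,
and the names MY_DEV_IRQ, MY_DEV_IRQ_PRIO and :c:func:`my_dev_isr()`
are assumptions for illustration; consult the processor-specific API
documentation for the exact signature.

.. code-block:: c

   static void my_dev_isr (void *arg)
   {
       /* acknowledge the device, then wake up a fiber or task */
   }

   void my_dev_init (void)
   {
       int vector;

       vector = irq_connect (MY_DEV_IRQ, MY_DEV_IRQ_PRIO, my_dev_isr, NULL);
       irq_enable (MY_DEV_IRQ);
   }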
Installing a Static ISR
***********************
The contents of a static interrupt stub are complex and board specific.
They are generally created manually as part of the BSP. A stub is
installed statically into the Interrupt Descriptor Table using one of
the macros detailed in the following table. The table lists the macros you
can use to identify and register your static ISRs into the Interrupt
Descriptor Table. The IA-32 interrupt descriptor allows for the setting
of the privilege level, DPL, at which the interrupt can be triggered.
Tiny Mountain assumes all device drivers are kernel mode (ring 0) as
opposed to user-mode (ring 3). Therefore, these macros always set the
DPL to 0.
The IDT Macros
==============
+--------------------------+-------------------------------------------------------------------------+
| Call | Description |
+==========================+=========================================================================+
| NANO_CPU_INT_REGISTER( ) | Use this macro to register a driver's |
| | interrupt |
| | handler statically when the vector number is known at compile time. |
+--------------------------+-------------------------------------------------------------------------+
| SYS_INT_REGISTER( ) | Use this macro to register a driver's |
| | interrupt handler statically when |
| | the vector number is not known at compile time but the priority and IRQ |
| | line are. The BSP is responsible for implementing this macro in board.h |
| | to generate a vector from the priority and IRQ line at compile time. |
| | The macro is intended to provide a level of abstraction between the BSP |
| | and the driver. |
+--------------------------+-------------------------------------------------------------------------+
Interrupt Descriptor Table
**************************
The Interrupt Descriptor Table (IDT) is a data structure that implements
an interrupt vector table used by the processor to determine the
correct response to interrupts and exceptions. To optimize boot
performance and increase security, Tiny Mountain implements targets
using a statically created Interrupt Descriptor Table, interrupt stubs
and exception stubs. A static Interrupt Descriptor Table improves boot
performance because:
* No CPU cycles are used to construct the Interrupt Descriptor Table
at boot up.
* No CPU cycles are used to create interrupt stubs at boot up.
* No CPU cycles are used to create exception stubs at run-time.
The statically created Interrupt Descriptor Table can still be updated
at run-time despite being write-protected. There may be instances where
updating the Interrupt Descriptor Table at run-time is required, for
example, in order to install dynamic interrupts. The decision of
whether a target implements dynamic or static interrupts is determined
at compile time automatically based on the configuration.
Securing the Interrupt Descriptor Table
***************************************
Typically the IDT resides in the data section. Enable the Section Write
Protection feature to move the Interrupt Descriptor Table to the rodata
section and to mark all pages of memory in which the Interrupt
Descriptor Table resides as read-only. Enabling the Section Write
Protection feature places dynamic interrupt stubs into the text section
protecting them. A system where eXecute In Place (XIP) support is
enabled assumes that the text section and the read-only data section
reside in read-only memory, such as flash memory or ROM. In this
scenario, dynamic
interrupt stubs are not possible. The Interrupt Descriptor Table cannot
be updated at runtime. Therefore enabling the Section Write Protection
feature blocks generating dynamic interrupt stubs and updating the
Interrupt Descriptor Table at runtime.
.. note::
   This implementation of XIP does not support a ROM-resident
   Interrupt Descriptor Table.

When the segmentation feature is enabled,
execution of code in the data segment is not allowed. If the
segmentation feature is enabled and section write protection is not
enabled, dynamic interrupt stubs move to the text section, but they are
still writable.
The following is an example of a dynamic interrupt stub for x86:
.. code-block:: c

   static NANO_CPU_INT_STUB_DECL (deviceStub);

   void deviceDriver (void)
   {
       .
       .
       .
       nanoCpuIntConnect (deviceIRQ, devicePrio, deviceIntHandler,
                          deviceStub);
       .
       .
       .
   }
This feature is part of Tiny Mountain's enhanced security profile.
Working with ISRs
*****************
Triggering Interrupts
=====================
The processor starts up an ISR when a hardware interrupt is received.
When one of the interrupt pins of the processor core is triggered, the
processor jumps to the appropriate interrupt routine. To interface this
hardware event with software, Tiny Mountain allows you to attach an ISR
to the interrupt signal.
An ISR can interface with a fiber using the nanokernel Application
Program Interfaces. The ISR can wake up a task using the microkernel
synchronization objects, an event, or by invoking the event handler. The
nanokernel affords them the lowest startup overhead because ISRs are
triggered from the hardware level. No context switch is needed to start
up an ISR.
When an interrupt occurs, all fibers and all tasks wait until the
interrupt is handled. If an application is executing a task or a fiber
is running, it is interrupted until the ISR finishes.
An ISR implementation is typically very hardware-specific because it
interfaces directly with a hardware interrupt and starts to run because
of it. The details of how this happens are described in your processor's
documentation.
Prototype your hardware-specific functionality in a task, before you
move it to the ISR code.
If an ISR calls a channel service with a signal action, any fiber
rescheduling resulting from this call is delayed until all interrupt
handlers terminate. Therefore, use only the nano_isr Application
Program Interfaces, as these do not invoke the system kernel scheduler
for a signal action. Keep in mind that there is no need for a swap at
this point; the caller has the highest priority already. Once the last
stacked interrupt terminates, the nanokernel scheduler must be called
to verify if a swap from the task to a fiber is necessary.
An ISR must never call any blocking channel Application Program
Interface. It would block the current fiber and all other interrupt
handlers that are stacked below the ISR.
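As an illustration of a non-blocking, ISR-level call, the sketch below
gives a nanokernel semaphore from an interrupt handler so that a
waiting fiber can run once all handlers complete. The semaphore type
and the :c:func:`nano_isr_sem_give()` name are assumptions; use the
nanokernel API documentation for the exact names and signatures.

.. code-block:: c

   static struct nano_sem data_ready_sem;   /* initialized elsewhere */

   void device_isr (void *arg)
   {
       /* acknowledge the hardware, then signal the fiber; never block here */
       nano_isr_sem_give (&data_ready_sem);
   }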
Tiny Mountain supports interrupt nesting. When an ISR is running, it can
be interrupted when a new interrupt is received.
Using Interrupt Service Routines
================================
If interrupts come in at high speed, parts of your code can be at the
ISR level. If code is at the interrupt level, it avoids a context
switch making it faster. If interrupts come in at low speeds, the ISR
should only wake up a fiber or a task. That fiber or task should do all
the processing, not the ISR, even if the task can be interrupted by
fibers and ISRs. Keep fibers and ISRs short to ensure predictability.
For example, take an application that implements an algorithm in an ISR.
Suppose the algorithm takes one second to finish calculating. The
application has a task in the background that interfaces with a host
machine to plot data on the screen. The task updates the screen image
five times per second to provide a smooth screen display. This
application as a whole does not behave predictably if an interrupt is
received. The ISR starts calculating for one second and causes an
unexpected delay. The same holds true if the algorithm is implemented
using a fiber. The user sees an interleaved screen output. This example
is extreme but it shows that short fibers and short ISRs make the
system more predictable.
Implementing Interrupt Service Routines
***************************************
Most processors require that ISRs be coded in assembler. To make the
implementation easier, several assembler macros are available to do the
most common jobs. Because the ISRs block all other processing, always
implement the actual handling of the interrupt in a fiber or a task.
Where to handle the interrupt is a design choice that must be made
while considering the performance of the processor and the frequency of
the interrupt.
Coordinating ISRs and Events
****************************
An ISR can send a signal from the nanokernel to the microkernel to
trigger an event. Your setup can work with an event handler, or without
one. If there is no event handler and your task is waiting for the
event, the ISR wakes up the task when it triggers the event. If you
have an event handler, the ISR triggers the event handler routine. This
event handler then determines if the task wakes up or not.
.. warning::
Implement or process a buffer in an event handler if your input
comes in at a high speed.
Command Packet Sets
*******************
A command packet set is a group of statically-allocated command packets.
A command packet is accessible to any application running in kernel
space. They are necessary when signaling a semaphore from an ISR via
:c:func:`isr_sem_give()`, since command packets are processed after the
ISR finishes. That makes stack-allocated command packets unsafe for
this purpose. A statically-allocated command packet is implicitly
released after being processed. Consequently, the operating system does
not track the use-status of any statically-allocated command packet.
There is a small but unavoidable risk of a command packet's processing
being incomplete before the ISR runs again and tries to reuse the
packet. To further minimize this risk Tiny Mountain introduces command
packet sets. Fundamentally, a command packet set is a simple ring
buffer. Retrieve command packets from the set using
:c:func:`cmdPktGet()`. Each command packet has to be processed in a
near-FIFO order, since no use-status checking is performed when a packet is
retrieved. In order to minimize the risk of packet corruption from
premature reuse, drivers that have an ISR component should use their
own command packet set and not use a common set for many drivers.
Create a command packet set in global memory using:
.. code-block:: c
CMD_PKT_SET_INSTANCE(setVariableName, #ofCommandPacketsInSet);
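As a sketch only, a driver with an ISR component might create its own
set and draw packets from it when signaling a semaphore. The names
deviceCmdPkts, DEVICE_SEM and :c:func:`device_isr()`, as well as the
exact :c:func:`isr_sem_give()` parameter list, are assumptions for
illustration.

.. code-block:: c

   /* A private set of four command packets for this driver. */
   CMD_PKT_SET_INSTANCE (deviceCmdPkts, 4);

   void device_isr (void *arg)
   {
       /* Obtain a packet from the driver's own set and use it to give
        * the semaphore; the packet is released implicitly after the
        * kernel has processed it.
        */
       isr_sem_give (DEVICE_SEM, cmdPktGet (&deviceCmdPkts));
   }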
Task Level Interrupt Processing
*******************************
The task-level interrupt processing feature allows interrupts to be
serviced at the task level, without having to develop kernel-level
ISRs. The *MAX_NUM_TASK_DEVS* kernel configuration option specifies the
total number of devices needing task-level interrupt support.
The default setting of 0 disables the following interfaces:
:c:func:`task_irq_alloc()`, :c:func:`task_irq_free()`,
:c:func:`task_irq_ack()` and :c:func:`task_irq_test()`. Each device has
a well-known identifier in the range from 0 to *MAX_NUM_TASK_DEVS*-1.
Tiny Mountain allows kernel tasks to bind to devices at run-time by
calling :c:func:`task_irq_alloc()`. A task may bind itself to multiple
devices by calling this routine multiple times but a given device can
be bound to only a single task at any point in time. The registering
task specifies the device it wishes to use, the associated IRQ and
priority level for the device's interrupt. It gets the assigned
interrupt vector in return. The interrupt associated with the device is
enabled once the task has registered to use a device. Whenever the
device generates an interrupt, the kernel automatically runs an ISR
that disables the interrupt and records its occurrence.
The task associated with the device can use :c:func:`task_irq_test()`
to determine if the device's interrupt has occurred. Alternatively, it
can use :c:func:`task_irq_test_wait()` or
:c:func:`task_irq_test_wait_timeout()` to wait until an interrupt is
detected.
After the task has taken the appropriate action to service an interrupt
generated by the device, it calls :c:func:`task_irq_ack()` to re-enable
the device's interrupt. The task can call :c:func:`task_irq_free()` to
unbind itself from a device that it no longer wishes to use. If the
registered device needs to change its priority level, it must first
unregister and then register again with the new priority. To provide
security against device misuse, a device should only be tested,
acknowledged, and deregistered by a task if that task registered the
device. Restrict, at the shim layer, which task can register a given
device or use the device after registration.
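The sketch below shows the general flow under these assumptions: the
device identifier, IRQ line, and priority values are illustrative, and
the parameter lists of :c:func:`task_irq_alloc()`,
:c:func:`task_irq_test_wait()` and :c:func:`task_irq_ack()` are
assumptions to be checked against the API documentation.

.. code-block:: c

   #define MY_DEV      0
   #define MY_DEV_IRQ  14
   #define MY_DEV_PRIO 3

   void device_task (void)
   {
       uint32_t vector;

       /* bind this task to the device and enable its interrupt */
       vector = task_irq_alloc (MY_DEV, MY_DEV_IRQ, MY_DEV_PRIO);

       for (;;) {
           task_irq_test_wait (MY_DEV);   /* wait for a recorded interrupt */
           /* ... service the device ... */
           task_irq_ack (MY_DEV);         /* re-enable the device's interrupt */
       }
   }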


@ -0,0 +1,298 @@
Tasks
#####
Properties of Tasks
*******************
A Tiny Mountain task is an execution thread that implements part or all
of the application functionality using the Tiny Mountain objects
described in detail by the Nanokernel Objects and Microkernel Objects
documents. Tasks are cooperatively scheduled, and will run until they
explicitly yield, call a blocking interface or are preempted by a
higher priority task.
Defining Tasks
**************
Microkernel tasks are statically defined in the Tiny Mountain project
file (which has a file extension of .vpf). The number of tasks in a
project file is limited only by the available memory on the platform. A
task definition in the project file must specify its name, priority,
entry point, task group, and stack size, as shown below:
.. code-block:: console
% TASK NAME PRIO ENTRY STACK GROUPS
% ===============================================
TASK philTask 5 philDemo 1024 [EXE]
TASK phi1Task0 6 philEntry 1024 [PHI]
Task groups must be specified. If a task does not belong to any groups,
an empty list can be specified, i.e. :literal:`[]`. A task can change
groups at runtime, but the project file defines the group the task
belongs to when it begins running. Task groups are statically
allocated, and need to be defined in the project file. For example, the
PHI group from the example above would be defined as:
.. code-block:: console
% TASKGROUP NAME
% ==============
TASKGROUP PHI
To write scalable and portable task code, observe the following
guidelines:
#. Define the task entry point prototype in the project file.
#. Use the C calling convention.
#. Use C linkage style.
.. note::
To maximize portability, use Tiny Mountain-defined objects, such
as memory maps or memory pools, instead of user-defined array
buffers.
Task Behavior
*************
When a task calls an API to operate on a Tiny Mountain object, it passes
an abstract object identifier called objectID. A task shall always
manipulate kernel data structures through the APIs and shall not
directly access the internals of any object, for example, the internals
of a semaphore or a FIFO.
Task Application Program Interfaces
***********************************
The task APIs allow a task to be started, stopped, suspended, resumed,
and aborted, and allow its priority, entry point, and group membership
to be changed. This table lists all task- and task-group-related
application program interfaces. For more information on each of these
application program interfaces, see the application program interfaces
documentation.
+----------------------------------------------------------------------+-------------------------------------------------------------------------------------------------------------------------------------+
| **Call** | **Description** |
+----------------------------------------------------------------------+-------------------------------------------------------------------------------------------------------------------------------------+
| :c:func:`scheduler_time_slice_set()` | Specifies the time slice period for round\-robin task scheduling. |
+----------------------------------------------------------------------+-------------------------------------------------------------------------------------------------------------------------------------+
| :c:func:`task_abort()` | Aborts a task. |
+----------------------------------------------------------------------+-------------------------------------------------------------------------------------------------------------------------------------+
| :c:func:`task_abort_handler_set()` | Installs or removes an abort handler. |
+----------------------------------------------------------------------+-------------------------------------------------------------------------------------------------------------------------------------+
| :c:func:`task_resume()` | Marks a task as runnable. |
+----------------------------------------------------------------------+-------------------------------------------------------------------------------------------------------------------------------------+
| :c:func:`task_entry_set()` | Sets a task's entry point. |
+----------------------------------------------------------------------+-------------------------------------------------------------------------------------------------------------------------------------+
| :c:func:`task_priority_set()` | Sets a task's priority. |
+----------------------------------------------------------------------+-------------------------------------------------------------------------------------------------------------------------------------+
| :c:func:`task_sleep()` | Marks a task as not runnable until a timeout expires. |
+----------------------------------------------------------------------+-------------------------------------------------------------------------------------------------------------------------------------+
| :c:func:`task_start()` | Starts processing a task. |
+----------------------------------------------------------------------+-------------------------------------------------------------------------------------------------------------------------------------+
| :c:func:`task_suspend()` | Marks a task as not runnable. |
+----------------------------------------------------------------------+-------------------------------------------------------------------------------------------------------------------------------------+
| :c:func:`task_yield()` | Yields the CPU to an equal\-priority task. |
+----------------------------------------------------------------------+-------------------------------------------------------------------------------------------------------------------------------------+
| :c:func:`task_node_id_get()`, :c:func:`isr_node_id_get()` | Gets the task's node ID. From an ISR, call :c:func:`isr_node_id_get()`; from a task, call :c:func:`task_node_id_get()`. |
+----------------------------------------------------------------------+-------------------------------------------------------------------------------------------------------------------------------------+
| :c:func:`task_group_abort()` | Aborts a group of tasks. |
+----------------------------------------------------------------------+-------------------------------------------------------------------------------------------------------------------------------------+
| :c:func:`task_group_join()` | Adds a task to a group. |
+----------------------------------------------------------------------+-------------------------------------------------------------------------------------------------------------------------------------+
| :c:func:`task_group_leave()` | Removes a task from a group. |
+----------------------------------------------------------------------+-------------------------------------------------------------------------------------------------------------------------------------+
| :c:func:`task_group_resume()` | Resumes processing of a group. |
+----------------------------------------------------------------------+-------------------------------------------------------------------------------------------------------------------------------------+
| :c:func:`task_group_start()` | Starts processing of a group. |
+----------------------------------------------------------------------+-------------------------------------------------------------------------------------------------------------------------------------+
| :c:func:`task_group_suspend()` | Marks all tasks in a group as not runnable. |
+----------------------------------------------------------------------+-------------------------------------------------------------------------------------------------------------------------------------+
| :c:func:`task_group_mask_get()`, :c:func:`isr_task_group_mask_get()` | Gets the task's group type. From an ISR, call :c:func:`isr_task_group_mask_get()`; from a task, call :c:func:`task_group_mask_get()`. |
+----------------------------------------------------------------------+-------------------------------------------------------------------------------------------------------------------------------------+
| :c:func:`task_id_get()`, :c:func:`isr_task_id_get()` | Gets the task's ID. From an ISR, call :c:func:`isr_task_id_get()`; from a task, call :c:func:`task_id_get()`. |
+----------------------------------------------------------------------+-------------------------------------------------------------------------------------------------------------------------------------+
| :c:func:`task_priority_get()`, :c:func:`isr_task_priority_get()` | Gets the task's priority. From an ISR, call :c:func:`isr_task_priority_get()`; from a task, call :c:func:`task_priority_get()`. |
+----------------------------------------------------------------------+-------------------------------------------------------------------------------------------------------------------------------------+
A task can find its own ID using :c:func:`task_id_get()`. The task's own
name can be used interchangeably as the ID; however, since the task's
name is chosen by the user, it can be changed. Using
:c:func:`task_id_get()` is the safest way to reference a task.
.. todo:: Add high level information about other APIs.
Task Implementation
*******************
Use Tiny Mountain objects and routine calls to interface a task with
other tasks running in the system. For example, achieve cooperation
between tasks by using synchronization objects, such as resources and
semaphores, or by passing parameters from one task to another using a
data-passing object.
Task Stack
==========
The compiler uses the task stack to store local task variables and to
implement parameter-passing between functions. Static and global
variables do not use memory from the stack. For more information about
defining memory segments, and the defaults used for different variable
types, consult the documentation for your compiler.
Task States
===========
Each task has a task state that the scheduler uses to determine whether
it is ready to run. This figure shows the possible task states and the
possible transitions. The most usual transitions are green,
bidirectional transitions are blue, and uncommon transitions are marked
orange.
.. figure:: figures/task_states.svg
:scale: 75 %
:alt: Possible Task States
Shows the possible states that a task might have and their transitions.
Starting and Stopping Tasks
---------------------------
Tasks in Tiny Mountain are started in one of three ways:
+ Automatically at boot time if it is assigned to the EXE task group.
+ Another task issues a :c:func:`task_start()` for the task.
+ Another task issues a :c:func:`task_group_start()` for any task
group the task belongs to.
The scheduler manages the execution of a task once it is running. If the
task performs a return from the routine that started it, the task
terminates and its stack can be reused. This ensures that the task
terminates safely and cleanly.
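For example, another task can start the statically defined philTask
(from the earlier project-file example) as in the sketch below; error
handling and the return behavior of :c:func:`task_start()` are omitted.

.. code-block:: c

   /* Start the statically defined task philTask. */
   task_start (philTask);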
Automatically Starting Tasks
----------------------------
Starting tasks automatically at boot time utilizes the task grouping
concept. All tasks belonging to the EXE group are put in a runnable
state immediately after the kernel boots up.
Tasks Starting Other Tasks
^^^^^^^^^^^^^^^^^^^^^^^^^^
.. todo:: Add details on how to start a task from within another task.
Task Scheduling
---------------
Once started, a task is scheduled for execution by the microkernel until
one of the following occurs:
* A higher-priority task becomes ready to run.
* The task completes.
* The task's time slice expires and another runnable task of equal
priority exists.
* The task becomes non-runnable.
Task Completion
^^^^^^^^^^^^^^^
.. todo:: Add details on how tasks complete.
Task Priorities
^^^^^^^^^^^^^^^
Tiny Mountain offers a configurable number of task priority levels. The
number ranges from 0 to :literal:`NUM_TASK_PRIORITIES-1`. The lowest
priority level (:literal:`NUM_TASK_PRIORITIES-1`) is reserved for use
by the microkernel's idle task. The priority of tasks is assigned
during the build process based upon the task definition in the project
file. The priority can be changed at any time, by either the task
itself or by another task calling :c:func:`task_priority_set()`.
If a task of higher priority becomes runnable, the kernel saves the
current task's context and runs the higher-priority task. It is also
possible for a task's priority to be temporarily changed to prevent a
condition known as priority inversion.
Priority Preemption
-------------------
The microkernel uses a priority-based preemptive scheduling algorithm
where the highest-priority task that is ready to run, runs. When a task
with a higher priority becomes runnable, the running task is
unscheduled and the task of higher priority is started. This is the
principle of preemption.
Suspended Tasks
^^^^^^^^^^^^^^^
Tasks can suspend other tasks, or themselves, using
:c:func:`task_suspend()`. The task stays suspended until
:c:func:`task_resume()` or :c:func:`task_abort()` is called by another
task. Use :c:func:`task_abort()` and :c:func:`task_group_abort()` with
care, as none of the affected tasks may own or be using kernel objects
when they are called. The safest abort practice is for a task to abort
only itself.
Aborting a Task
---------------
Tasks can have an abort handler, a C routine that runs as a critical
section when a task is aborted. Since the routine runs as a critical
section, it cannot be preempted or unscheduled, allowing the task to
clean up properly. Because of this, abort handlers cannot make kernel
API calls.

To install an abort handler function, use
:c:func:`task_abort_handler_set()`. This binds the routine so that,
when :c:func:`task_abort()` is called, the abort handler function runs
immediately.
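A minimal sketch follows, assuming an abort handler takes no arguments
and returns void; the routine names are hypothetical, and the cleanup
must not make kernel API calls.

.. code-block:: c

   /* Hypothetical abort handler: performs non-kernel cleanup only. */
   static void my_abort_handler (void)
   {
       device_shutdown_hw ();   /* application-specific, no kernel calls */
   }

   void my_task (void)
   {
       task_abort_handler_set (my_abort_handler);
       /* ... normal task processing ... */
   }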
Time-Slicing
------------
Time-slicing, enabled through the :c:func:`scheduler_time_slice_set()`
function, can share a processor between multiple tasks with the same
priority. When enabled, the kernel preempts a task that has run for a
certain amount of time, the time slice, and schedules another runnable
task with the same priority. The sorting of tasks of equal priority
is a fundamental microkernel scheduling concept and is not
limited to cases involving :c:func:`task_yield()`.
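As an illustration only, time-slicing might be enabled as in the sketch
below; the parameter meanings (a slice length in ticks and a priority
threshold for which slicing applies) are assumptions to verify against
the API documentation.

.. code-block:: c

   /* Sketch: give runnable tasks of equal priority a 20-tick time
    * slice.  The second argument is assumed to be the priority level
    * for which slicing applies; check the API documentation.
    */
   scheduler_time_slice_set (20, 10);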
The same effect as time-slicing can be achieved using
:c:func:`task_yield()`. When this call is made, the current task
relinquishes the processor if another task of the same priority is
ready to run. The calling task returns to the queue of runnable tasks.
If no other task of the same priority is runnable, the task that called
:c:func:`task_yield()` continues running.
.. note::
:c:func:`task_yield()` sorts the tasks in FIFO order.
Task Context Switches
^^^^^^^^^^^^^^^^^^^^^
When a task swap occurs, Tiny Mountain saves the context of the task
that is swapped out and restores the context of the task that is
swapped in.


@ -0,0 +1,455 @@
.. _microkernelObjects:
Microkernel Objects
###################
Section Scope
*************
This section provides an overview of the most important microkernel
objects, and their operation.
Each object contains a definition, a function description, and a table
of Application Program Interfaces (APIs), including the contexts that
may call them. Please refer to the API documentation for further details
regarding each object's functionality.
Microkernel FIFO Objects
************************
Definition
==========
The Tiny Mountain FIFO object is defined in
:file:`include/microkernel/fifo.h` as a simple first-in, first-out
queue that handles small amounts of fixed-size data. FIFO objects have a
buffer that stores a number of data items, and are the most
efficient way to pass small amounts of data between tasks. FIFO objects
are suitable for asynchronously transferring small amounts of data,
such as parameters, between tasks.
Function
========
FIFO objects store data in a statically allocated buffer defined within
the project's VPF file. The depth of the FIFO object buffer is only
limited by the available memory on the platform. Individual FIFO data
items can be at most 40 bytes in size, and are stored on a first-come,
first-served basis, not by priority.
FIFO objects are asynchronous. When using a FIFO object, the sender can
add data even if the receiver is not ready yet. This only applies if
there is sufficient space on the buffer to store the sender's data.
FIFO objects are anonymous. The kernel object does not store the sender
or receiver identity. If the sender identification is required, it is
up to the caller to store that information in the data placed into the
FIFO. The receiving task can then check it. Alternatively, mailboxes
can be used to specify the sender and receiver identities.
FIFO object read and write actions are always fixed-size and block-based.
The width of each FIFO object block is specified in the project file.
If a task calls :c:func:`task_fifo_get()` and the call succeeds, then
the fixed number of bytes is copied from the FIFO object into the
addresses of the destination pointer.
Initialization
==============
FIFO objects are created by defining them in a project file, for example
:file:`projName.vpf`. Specify the name of the FIFO object, the width in
bytes of a single entry, the number of entries, and, if desired, the
location defined in the architecture file to be used for the FIFO. Use
the following syntax in the VPF file to define a FIFO:
.. code-block:: console
FIFO %name %depthNumEntries %widthBytes [ bufSegLocation ]
An example of a FIFO entry for use in the VPF file:
.. code-block:: console
% FIFO NAME DEPTH WIDTH
% ============================
FIFO FIFOQ 2 4
Application Program Interfaces
==============================
The FIFO object APIs allow putting data on the queue, receiving data
from the queue, finding the number of messages in the queue, and
emptying the queue.
+----------------------------------------+-------------------------------------------------+
| **Call** | **Description** |
+----------------------------------------+-------------------------------------------------+
| :c:func:`task_fifo_put()` | Put data on a FIFO, and fail |
| | if the FIFO is full. |
+----------------------------------------+-------------------------------------------------+
| :c:func:`task_fifo_put_wait()` | Put data on a FIFO, waiting |
| | for room in the FIFO. |
+----------------------------------------+-------------------------------------------------+
| :c:func:`task_fifo_put_wait_timeout()` | Put data on a FIFO, waiting |
| | for room in the FIFO, or a time out. |
+----------------------------------------+-------------------------------------------------+
| :c:func:`task_fifo_get()` | Get data off a FIFO, |
| | returning immediately if no data is available. |
+----------------------------------------+-------------------------------------------------+
| :c:func:`task_fifo_get_wait()` | Get data off a FIFO, |
| | waiting until data is available. |
+----------------------------------------+-------------------------------------------------+
| :c:func:`task_fifo_get_wait_timeout()` | Get data off a FIFO, |
| | waiting until data is available, or a time out. |
+----------------------------------------+-------------------------------------------------+
| :c:func:`task_fifo_purge()` | Empty the FIFO buffer, and |
| | signal any waiting receivers with an error. |
+----------------------------------------+-------------------------------------------------+
| :c:func:`task_fifo_size_get()` | Read the number of filled |
| | entries in a FIFO. |
+----------------------------------------+-------------------------------------------------+
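The following sketch shows one way two tasks might use the FIFOQ object
defined in the VPF example above. It is a minimal sketch: the task and
variable names are hypothetical, the project's generated kernel header
is assumed to provide the FIFOQ identifier and the API prototypes, and
it is assumed that the put and get calls take the FIFO name plus a
pointer to one fixed-size entry and return RC_OK on success.

.. code-block:: c

   /* The project's generated kernel API header is assumed to be included. */

   /* Producer task: enqueue one 4-byte entry on FIFOQ
    * (2 entries deep, 4 bytes wide in the VPF example above).
    */
   void producer_task(void)
   {
       uint32_t sample = 42;

       /* Fails if both entries of FIFOQ are already filled ... */
       if (task_fifo_put(FIFOQ, &sample) != RC_OK) {
           /* ... so block here until a consumer makes room. */
           task_fifo_put_wait(FIFOQ, &sample);
       }
   }

   /* Consumer task: copy one fixed-size entry out of FIFOQ. */
   void consumer_task(void)
   {
       uint32_t sample;

       /* Blocks until an entry is available, then copies it to &sample. */
       task_fifo_get_wait(FIFOQ, &sample);
   }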
Pipe Objects
************
Definition
==========
Microkernel pipes are defined in :file:`kernel/microkernel/k_pipe.c`.
Pipes allow any task to put any amount of data in or out. Pipes are
conceptually similar to FIFO objects in that they communicate
anonymously in a time-ordered, first-in, first-out manner, to exchange
data between tasks. Like FIFO objects, pipes can have a buffer, but
un-buffered operation is also possible. The main difference between
FIFO objects and pipes is that pipes handle variable-sized data.
Function
========
Pipes accept and send variable-sized data, and can be configured to work
with or without a buffer. Buffered pipes are time-ordered. The incoming
data is stored on a first-come, first-served basis in the buffer; it is
not sorted by priority.
Pipes have no size limit. The size of the data transfer and the size of
the buffer have no limit except for the available memory. Pipes allow
senders or receivers to perform partial read and partial write
operations.
Pipes support both synchronous and asynchronous operations. If a pipe is
unbuffered, the sender can asynchronously put data into the pipe, or
wait for the data to be received, and the receiver can attempt to
remove data from the pipe, or wait on the data to be available.
Buffered pipes are synchronous by design.
Pipes are anonymous. The pipe transfer does not identify the sender or
receiver. Alternatively, mailboxes can be used to specify the sender
and receiver identities.
Initialization
==============
A target pipe has to be defined in the project file, for example
:file:`projName.vpf`. Specify the name of the pipe, the size of the
buffer in bytes, and the memory location for the pipe buffer as defined
in the linker script. The buffer's memory is allocated on the processor
that manages the pipe. Use the following syntax in the VPF file to
define a pipe:
.. code-block:: console
PIPE %name %buffersize [%bufferSegment]
An example of a pipe entry for use in the VPF file:
.. code-block:: console
% PIPE NAME BUFFERSIZE [BUFFER_SEGMENT]
% ===================================================
PIPE PIPE_ID 256
Application Program Interfaces
==============================
The pipe APIs allow sending data to, and receiving data from, a pipe.
+-----------------------------------------+-------------------------------------------------------------+
| **Call**                                 | **Description**                                             |
+-----------------------------------------+-------------------------------------------------------------+
| :c:func:`task_pipe_put()`                | Put data on a pipe.                                         |
+-----------------------------------------+-------------------------------------------------------------+
| :c:func:`task_pipe_put_wait()`           | Put data on a pipe, waiting if necessary.                   |
+-----------------------------------------+-------------------------------------------------------------+
| :c:func:`task_pipe_put_wait_timeout()`   | Put data on a pipe, waiting if necessary, with a timeout.   |
+-----------------------------------------+-------------------------------------------------------------+
| :c:func:`task_pipe_get()`                | Get data from a pipe.                                       |
+-----------------------------------------+-------------------------------------------------------------+
| :c:func:`task_pipe_get_wait()`           | Get data from a pipe, waiting if necessary.                 |
+-----------------------------------------+-------------------------------------------------------------+
| :c:func:`task_pipe_get_wait_timeout()`   | Get data from a pipe, waiting if necessary, with a timeout. |
+-----------------------------------------+-------------------------------------------------------------+
| :c:func:`task_pipe_put_async()`          | Put data on a pipe asynchronously.                          |
+-----------------------------------------+-------------------------------------------------------------+
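A minimal sketch of a pipe transfer using the PIPE_ID object defined in
the VPF example above. The parameter layout shown here, pipe name, data
buffer, byte count, a bytes-transferred out-parameter, and a transfer
option, is an assumption for illustration only, as are the option names;
consult the pipe API reference for the exact prototypes.

.. code-block:: c

   /* The project's generated kernel API header is assumed to be included. */

   /* Send a variable-sized record; block until the whole record has been
    * accepted by the pipe (hypothetical "all or nothing" option _ALL_N).
    */
   void send_record(char *record, int len)
   {
       int sent = 0;

       task_pipe_put_wait(PIPE_ID, record, len, &sent, _ALL_N);
   }

   /* Receive up to max_len bytes; block until at least one byte is
    * available (hypothetical _1_TO_N option allows a partial read).
    */
   void receive_record(char *buf, int max_len)
   {
       int received = 0;

       task_pipe_get_wait(PIPE_ID, buf, max_len, &received, _1_TO_N);
   }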
Mailbox Objects
***************
Definition
==========
A Tiny Mountain mailbox object is defined in
:file:`include/microkernel/mail.h`. Mailboxes are a flexible way for
tasks to exchange messages and pass data.
Function
========
Each transfer within a mailbox can vary in size. The size of a data
transfer is only limited by the available memory on the platform.
Transmitted data is not buffered in the mailbox itself. Instead, the
buffer is either allocated from a memory pool block, or is a block of
memory defined by the user.
Mailboxes can work synchronously and asynchronously. Asynchronous
transfers require the sender to allocate a buffer from a memory pool
block, while synchronous transfers copy the sender's data to the
receiver's buffer.
The transfer contains one word of information that identifies either the
sender, or the receiver, or both. The sender task specifies the task it
wants to send to. The receiver task specifies the task it wants to
receive from. Then the mailbox checks the identity of the sender and
receiver tasks before passing the data.
Initialization
==============
A mailbox has to be defined in the project file, for example
:file:`projName.vpf`, which specifies the object type and the name
of the mailbox. Use the following syntax in the VPF file to define a
Mailbox:
.. code-block:: console
MAILBOX %name
An example of a mailbox entry for use in the VPF file:
.. code-block:: console
% MAILBOX NAME
% =================
MAILBOX MYMBOX
Application Program Interfaces
==============================
Mailbox APIs provide flexibility and control for transferring data
between tasks.
+---------------------------------------------+------------------------------------------------------------------------+
| **Call**                                     | **Description**                                                        |
+---------------------------------------------+------------------------------------------------------------------------+
| :c:func:`task_mbox_put()`                    | Put data in a mailbox, and fail if the receiver is not waiting.        |
+---------------------------------------------+------------------------------------------------------------------------+
| :c:func:`task_mbox_put_wait()`               | Put data in a mailbox, waiting for it to be received.                  |
+---------------------------------------------+------------------------------------------------------------------------+
| :c:func:`task_mbox_put_wait_timeout()`       | Put data in a mailbox, waiting for it to be received, with a timeout.  |
+---------------------------------------------+------------------------------------------------------------------------+
| :c:func:`task_mbox_put_async()`              | Put data in a mailbox asynchronously.                                  |
+---------------------------------------------+------------------------------------------------------------------------+
| :c:func:`task_mbox_get()`                    | Get k_msg message header information and mailbox data, or return       |
|                                              | immediately if the sender is not ready.                                |
+---------------------------------------------+------------------------------------------------------------------------+
| :c:func:`task_mbox_get_wait()`               | Get k_msg message header information and mailbox data, waiting         |
|                                              | until the sender is ready.                                             |
+---------------------------------------------+------------------------------------------------------------------------+
| :c:func:`task_mbox_get_wait_timeout()`       | Get k_msg message header information and mailbox data, waiting         |
|                                              | until the sender is ready, with a timeout.                             |
+---------------------------------------------+------------------------------------------------------------------------+
| :c:func:`task_mbox_data_get()`               | Get mailbox data and put it in a buffer specified by a pointer.        |
+---------------------------------------------+------------------------------------------------------------------------+
| :c:func:`task_mbox_data_get_async_block()`   | Get mailbox data and put it in a memory pool block.                    |
+---------------------------------------------+------------------------------------------------------------------------+
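A hypothetical sketch of a synchronous exchange through the MYMBOX
object defined above. The k_msg field names, the ANYTASK wildcard, and
the call signatures shown here are assumptions for illustration only;
consult the mailbox API reference for the exact descriptor layout.

.. code-block:: c

   /* The project's generated kernel API header is assumed to be included. */

   void sender_task(void)
   {
       char greeting[] = "hello";
       struct k_msg msg;

       msg.mailbox = MYMBOX;
       msg.tx_data = greeting;             /* data to hand to the receiver   */
       msg.size    = sizeof(greeting);     /* number of bytes to transfer    */
       msg.rx_task = ANYTASK;              /* any task may receive this item */

       /* Block until a receiver has taken the data; the middle argument
        * is assumed to be the message priority.
        */
       task_mbox_put_wait(MYMBOX, 0, &msg);
   }

   void receiver_task(void)
   {
       char buffer[16];
       struct k_msg msg;

       msg.mailbox = MYMBOX;
       msg.rx_data = buffer;               /* where to place incoming data   */
       msg.size    = sizeof(buffer);       /* maximum bytes to accept        */
       msg.tx_task = ANYTASK;              /* accept data from any sender    */

       /* Block until a matching sender provides data. */
       task_mbox_get_wait(MYMBOX, &msg);
   }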
Semaphore Objects
*****************
Definition
==========
The microkernel semaphore is defined in
:file:`kernel/microkernel/k_sema.c` and is an implementation of
traditional counting semaphores. Semaphores are used to synchronize
application task activities.
Function
========
Semaphores are initialized by the system. At start the semaphore is
un-signaled and no task is waiting for it. Any task in the system can
signal a semaphore. Every signal increments the count value associated
with the semaphore. When several tasks wait for the same semaphore at
the same time, they are held in a prioritized list. If the semaphore is
signaled, the task with the highest priority is released. If more tasks
of that priority are waiting, the first one that requested the
semaphore wakes up. Other tasks can test the semaphore to see if it is
signaled. If not signaled, tasks can either wait, with or without a
timeout, until signaled or return immediately with a failed status.
Initialization
==============
A semaphore has to be defined in the project file, for example
:file:`projName.vpf`, which specifies the object type and the name
of the semaphore. Use the following syntax in the VPF file to define a
semaphore:
.. code-block:: console
SEMA %name %node
An example of a semaphore entry for use in the VPF file:
.. code-block:: console
% SEMA NAME
% =================
SEMA SEM_TASKDONE
Application Program Interfaces
==============================
Semaphore APIs allow signaling a semaphore. They also provide means to
reset the signal count.
+----------------------------------------+---------------------------------------------------+
| **Call** | **Description** |
+----------------------------------------+---------------------------------------------------+
| :c:func:`isr_sem_give()` | Signal a semaphore from an ISR. |
+----------------------------------------+---------------------------------------------------+
| :c:func:`task_sem_give()` | Signal a semaphore from a task. |
+----------------------------------------+---------------------------------------------------+
| :c:func:`task_sem_take()` | Test a semaphore from a task. |
+----------------------------------------+---------------------------------------------------+
| :c:func:`task_sem_take_wait()` | Wait on a semaphore from a task. |
+----------------------------------------+---------------------------------------------------+
| :c:func:`task_sem_take_wait_timeout()` | Wait on a semaphore, with a timeout, from a task. |
+----------------------------------------+---------------------------------------------------+
| :c:func:`task_sem_group_reset()` | Sets a list of semaphores to zero. |
+----------------------------------------+---------------------------------------------------+
| :c:func:`task_sem_group_give()` | Signals a list of semaphores from a task. |
+----------------------------------------+---------------------------------------------------+
| :c:func:`task_sem_reset()` | Sets a semaphore to zero. |
+----------------------------------------+---------------------------------------------------+
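A minimal sketch of task-level signaling with the SEM_TASKDONE
semaphore defined in the VPF example above. It assumes the
single-argument calls listed in the table and that the non-blocking
test returns RC_OK on success; the task names are hypothetical.

.. code-block:: c

   /* The project's generated kernel API header is assumed to be included. */

   /* Worker task: signal that its job is finished. */
   void worker_task(void)
   {
       /* ... do the work ... */

       task_sem_give(SEM_TASKDONE);            /* increment the count */
   }

   /* Supervisor task: wait for the worker's signal. */
   void supervisor_task(void)
   {
       /* Block until the worker signals, then decrement the count. */
       task_sem_take_wait(SEM_TASKDONE);

       /* Alternatively, poll without blocking. */
       if (task_sem_take(SEM_TASKDONE) == RC_OK) {
           /* the semaphore had been signaled */
       }
   }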
Event Objects
*************
Definition
==========
Event objects are microkernel synchronization objects that tasks can
signal and test. Fibers and interrupt service routines may signal
events but they cannot test or wait on them. Use event objects for
situations in which multiple signals come in but only one test is
needed to reset the event. Unlike semaphores, events do not count
signals; they are binary. A single signal makes an event available, and
a single test clears the event and makes it unavailable again.
Function
========
Events were designed for interrupt service routines and nanokernel
fibers that need to wake up a waiting task. The event signal causes a
task's event test to return RC_OK. Events are the easiest and most
efficient way to wake up a task and synchronize operations between the
two levels.
A key feature of events is the event handler. An event handler is
attached to an event and performs simple processing in the nanokernel
before a context switch is made to a blocked task. This way, signals
can be interpreted before the system needs to reschedule a fiber or
task.
Only one task may wait for an event. If a second task tests the same
event, the call fails. Use a semaphore when multiple tasks need to wait
on the same signal.
Initialization
==============
An event has to be defined in the project file, :file:`projName.vpf`.
Specify the name of the event, the name of the processor node that
manages it, and its event-handler function. Use the following syntax:
.. code-block:: console
EVENT name handler
.. note::
In the project file, you can specify the name of the event and the
event handler, but not the event's number.
Define application events in the project's VPF file. Define the drivers'
events in either the project's VPF file or a BSP-specific VPF file.
Application Program Interfaces
==============================
Event APIs allow signaling or testing an event (blocking or
non-blocking), and setting the event handler.
If the event is in a signaled state, the test function returns
successfully and resets the event to the non-signaled state. If the
event is not signaled at the time of the call, the test either reports
failure immediately, in the case of a non-blocking call, or blocks the
calling task until the event signal becomes available.
+------------------------------------------+------------------------------------------------------------+
| **Call** | **Description** |
+------------------------------------------+------------------------------------------------------------+
| :c:func:`fiber_event_send()` | Signal an event from a fiber. |
+------------------------------------------+------------------------------------------------------------+
| :c:func:`task_event_set_handler()` | Installs or removes an event handler function from a task. |
+------------------------------------------+------------------------------------------------------------+
| :c:func:`task_event_send()` | Signal an event from a task. |
+------------------------------------------+------------------------------------------------------------+
| :c:func:`task_event_recv()` | Waits for an event signal. |
+------------------------------------------+------------------------------------------------------------+
| :c:func:`task_event_recv_wait()` | Waits for an event signal with a delay. |
+------------------------------------------+------------------------------------------------------------+
| :c:func:`task_event_recv_wait_timeout()` | Waits for an event signal with a delay and a timeout. |
+------------------------------------------+------------------------------------------------------------+
| :c:func:`isr_event_send()` | Signal an event from an ISR |
+------------------------------------------+------------------------------------------------------------+
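A hypothetical sketch of the ISR-to-task wake-up pattern described
above. The event name MY_EVENT, the ISR prototype, and the handler
return convention are assumptions for illustration; only the API names
come from the table.

.. code-block:: c

   /* The project's generated kernel API header is assumed to be included. */

   /* Optional event handler: runs in the nanokernel before any task is
    * rescheduled. Returning nonzero is assumed to pass the signal on to
    * the waiting task; returning zero is assumed to consume it.
    */
   int filter_handler(int event)
   {
       return 1;
   }

   /* Device ISR: mark the event as signaled. */
   void device_isr(void *arg)
   {
       isr_event_send(MY_EVENT);
   }

   /* Only one task may wait on MY_EVENT at a time. */
   void waiting_task(void)
   {
       task_event_set_handler(MY_EVENT, filter_handler);

       /* Block until the event is signaled; a successful test resets it. */
       task_event_recv_wait(MY_EVENT);
   }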


@ -0,0 +1,336 @@
.. _nanokernelObjects:
Nanokernel Objects
##################
Section Scope
*************
This section provides an overview of the most important nanokernel
objects. The information contained here is an aid to better understand
how Tiny Mountain operates at the nanokernel level.
Document Format
***************
Each object is described in its own section, containing a definition, a
functional description, the object initialization syntax, and a table
of Application Program Interfaces (APIs) with the context that may
call them. Please refer to the API documentation for further details
regarding each object's functionality.
Nanokernel FIFO
***************
Definition
==========
The FIFO object is defined in :file:`kernel/nanokernel/nano_fifo.c`.
It is a linked list of memory blocks that allows the caller to store
data of any size. The data is stored in first-in, first-out order.
Function
========
Multiple contexts can wait on the same FIFO object. Data is passed to
the first fiber that waited on the FIFO, and then to the background
task if no fibers are waiting. Through this mechanism the FIFO object
can synchronize or communicate between more than two contexts through
its API. Any ISR, fiber, or task can attempt to get data from a FIFO
without waiting on the data to be stored.
.. note::
The FIFO object reserves the first WORD in each stored memory
block as a link pointer to the next item. The size of the WORD
depends on the platform and can be 16 bit, 32 bit, etc.
Application Program Interfaces
==============================
+--------------------------------+--------------------------------------------------------------------------------------------------------+
| **Context** | **Interfaces** |
+--------------------------------+--------------------------------------------------------------------------------------------------------+
| **Initialization** | :c:func:`nano_fifo_init()` |
+--------------------------------+--------------------------------------------------------------------------------------------------------+
| **Interrupt Service Routines** | :c:func:`nano_isr_fifo_get()`, :c:func:`nano_isr_fifo_put()` |
+--------------------------------+--------------------------------------------------------------------------------------------------------+
| **Fibers** | :c:func:`nano_fiber_fifo_get()`, :c:func:`nano_fiber_fifo_get_wait()`, :c:func:`nano_fiber_fifo_put()` |
+--------------------------------+--------------------------------------------------------------------------------------------------------+
| **Tasks** | :c:func:`nano_task_fifo_get()`, :c:func:`nano_task_fifo_get_wait()`, :c:func:`nano_task_fifo_put()` |
+--------------------------------+--------------------------------------------------------------------------------------------------------+
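A minimal sketch of the reserved-link-word convention and the put/get
calls, using hypothetical names (my_fifo, struct fifo_item):

.. code-block:: c

   #include <nanokernel.h>

   /* Each item stored in a nanokernel FIFO must reserve its first word
    * for the kernel's link pointer, as described in the note above.
    */
   struct fifo_item {
       void *link;        /* reserved for the kernel */
       int payload;       /* application data        */
   };

   struct nano_fifo my_fifo;
   struct fifo_item item = { .payload = 123 };

   void init_fifo(void)
   {
       nano_fifo_init(&my_fifo);                /* once, before any use */
   }

   void producer_fiber(void)
   {
       nano_fiber_fifo_put(&my_fifo, &item);    /* never blocks */
   }

   void consumer_task(void)
   {
       struct fifo_item *received;

       /* Block until an item is available. */
       received = nano_task_fifo_get_wait(&my_fifo);
   }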
Nanokernel LIFO Object
**********************
Definition
==========
The LIFO is defined in :file:`kernel/nanokernel/nano_lifo.c`. It
consists of a linked list of memory blocks that uses the first word in
each block as a next pointer. The data is stored in last-in-first-out
order.
Function
========
When a message is added to the LIFO, the data is stored at the head of
the list. Messages taken off the LIFO object are taken from the head.
The LIFO object requires the first 32-bit word to be empty in order to
maintain the linked list.
The LIFO object does not store information about the size of the
messages.
The LIFO object remembers one waiting context. When a second context
starts waiting for data from the same LIFO object, the first context
remains waiting and never reaches the runnable state.
Application Program Interfaces
==============================
+--------------------------------+--------------------------------------------------------------------------------------------------------+
| **Context** | **Interfaces** |
+--------------------------------+--------------------------------------------------------------------------------------------------------+
| **Initialization** | :c:func:`nano_lifo_init()` |
+--------------------------------+--------------------------------------------------------------------------------------------------------+
| **Interrupt Service Routines** | :c:func:`nano_isr_lifo_get()`, :c:func:`nano_isr_lifo_put()` |
+--------------------------------+--------------------------------------------------------------------------------------------------------+
| **Fibers** | :c:func:`nano_fiber_lifo_get()`, :c:func:`nano_fiber_lifo_get_wait()`, :c:func:`nano_fiber_lifo_put()` |
+--------------------------------+--------------------------------------------------------------------------------------------------------+
| **Tasks** | :c:func:`nano_task_lifo_get()`, :c:func:`nano_task_lifo_get_wait()`, :c:func:`nano_task_lifo_put()` |
+--------------------------------+--------------------------------------------------------------------------------------------------------+
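As with the FIFO, a minimal sketch using hypothetical names; the first
word of each block is again reserved for the kernel's link pointer:

.. code-block:: c

   #include <nanokernel.h>

   struct lifo_item {
       void *link;        /* reserved for the kernel */
       int payload;       /* application data        */
   };

   struct nano_lifo my_lifo;
   struct lifo_item item = { .payload = 7 };

   void init_lifo(void)
   {
       nano_lifo_init(&my_lifo);                /* once, before any use */
   }

   void producer_fiber(void)
   {
       nano_fiber_lifo_put(&my_lifo, &item);    /* stored at the head */
   }

   void consumer_task(void)
   {
       struct lifo_item *latest;

       /* Block until an item is available; the most recent one is returned. */
       latest = nano_task_lifo_get_wait(&my_lifo);
   }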
Nanokernel Semaphore
********************
Definition
==========
The nanokernel semaphore is defined in
:file:`kernel/nanokernel/nano_sema.c` and implements a counting
semaphore that sends signals from one fiber to another.
Function
========
Nanokernel semaphore objects can be used from an ISR, a fiber, or the
background task. Interrupt handlers can use the nanokernel's semaphore
object to reschedule a fiber waiting for the interrupt.
Only one context can wait on a semaphore at a time. The semaphore starts
with a count of 0 and remains that way if no context is pending on it.
Each 'give' operation increments the count by 1. Following multiple
'give' operations, the same number of 'take' operations can be
performed without the calling context having to wait on the semaphore.
Thus, after n 'give' operations, a context can 'take' the semaphore n times without
pending. If a second context waits for the same semaphore object, the
first context is lost and never wakes up.
Application Program Interfaces
==============================
+--------------------------------+--------------------------------------------------------------------------------------------------------+
| Context | Interfaces |
+================================+========================================================================================================+
| **Initialization** | :c:func:`nano_sem_init()` |
+--------------------------------+--------------------------------------------------------------------------------------------------------+
| **Interrupt Service Routines** | :c:func:`nano_isr_sem_give()`, :c:func:`nano_isr_sem_take()` |
+--------------------------------+--------------------------------------------------------------------------------------------------------+
| **Fibers** | :c:func:`nano_fiber_sem_give()`, :c:func:`nano_fiber_sem_take()`, :c:func:`nano_fiber_sem_take_wait()` |
+--------------------------------+--------------------------------------------------------------------------------------------------------+
| **Tasks** | :c:func:`nano_task_sem_give()`, :c:func:`nano_task_sem_take()`, :c:func:`nano_task_sem_take_wait()` |
+--------------------------------+--------------------------------------------------------------------------------------------------------+
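A minimal sketch of the ISR-to-fiber wake-up described above, with
hypothetical names; the interrupt registration itself is
platform-specific and omitted, and the semaphore is assumed to be
initialized before the interrupt is enabled:

.. code-block:: c

   #include <nanokernel.h>

   struct nano_sem data_ready_sem;

   /* Interrupt handler: signal that the device has produced data. */
   void device_isr(void *arg)
   {
       nano_isr_sem_give(&data_ready_sem);    /* fiber becomes runnable */
   }

   /* Fiber: service the device each time the ISR signals. */
   void service_fiber(int arg1, int arg2)
   {
       nano_sem_init(&data_ready_sem);

       while (1) {
           /* Pend until the ISR signals; each take consumes one give. */
           nano_fiber_sem_take_wait(&data_ready_sem);

           /* ... process the new data ... */
       }
   }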
Timer Objects
*************
Definition
==========
The timer object is defined in :file:`kernel/nanokernel/nano_timer.c`
and implements digital counters that either increment or decrement at a
fixed frequency. Timers can be used from a task or fiber context.
Function
========
Only a fiber or task context can call timers. Nanokernel timers can
only be used when the nanokernel is not part of a microkernel, and they
are optional in nanokernel-only systems. The nanokernel timers are
simple. The :c:func:`nano_node_tick_delta()` routine is not reentrant
and should only be called from a single context, unless it is certain
that no other context is using the elapsed timer.
Application Program Interfaces
==============================
+--------------------------------+---------------------------------------------------------------------------------------------------------------------------------------------+
| **Context** | **Interface** |
+--------------------------------+---------------------------------------------------------------------------------------------------------------------------------------------+
| **Initialization** | :c:func:`nano_timer_init()` |
+--------------------------------+---------------------------------------------------------------------------------------------------------------------------------------------+
| **Interrupt Service Routines** | |
+--------------------------------+---------------------------------------------------------------------------------------------------------------------------------------------+
| **Fibers** | :c:func:`nano_fiber_timer_test()`, :c:func:`nano_fiber_timer_wait()`, :c:func:`nano_fiber_timer_start()`, :c:func:`nano_fiber_timer_stop()` |
+--------------------------------+---------------------------------------------------------------------------------------------------------------------------------------------+
| **Tasks** | :c:func:`nano_task_timer_test()`, :c:func:`nano_task_timer_wait()`, :c:func:`nano_task_timer_start()`, :c:func:`nano_task_timer_stop()` |
+--------------------------------+---------------------------------------------------------------------------------------------------------------------------------------------+
Semaphore, Timer, and Fiber Example
***********************************
The following example is pulled from the file:
:file:`samples/microkernel/apps/hello_world/src/hello.c`.
Example Code
============
.. code-block:: c
#include <nanokernel.h>
#include <nanokernel/cpu.h>
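/* PRINT is assumed to map to the console output routine (printf() or
 * printk()); its definition is omitted from this excerpt.
 */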
/* specify delay between greetings (in ms); compute equivalent in ticks */
#define SLEEPTIME  500
#define SLEEPTICKS (SLEEPTIME * CONFIG_TICKFREQ / 1000)
#define STACKSIZE 2000
char fiberStack[STACKSIZE];
struct nano_sem nanoSemTask;
struct nano_sem nanoSemFiber;
void fiberEntry (void)
{
struct nano_timer timer;
uint32_t data[2] = {0, 0};
nano_sem_init (&nanoSemFiber);
nano_timer_init (&timer, data);
while (1)
{
/* wait for task to let us have a turn */
nano_fiber_sem_take_wait (&nanoSemFiber);
/* say "hello" */
PRINT ("%s: Hello World!\n", __FUNCTION__);
/* wait a while, then let task have a turn */
nano_fiber_timer_start (&timer, SLEEPTICKS);
nano_fiber_timer_wait (&timer);
nano_fiber_sem_give (&nanoSemTask);
}
}
void main (void)
{
struct nano_timer timer;
uint32_t data[2] = {0, 0};
task_fiber_start (&fiberStack[0], STACKSIZE,
(nano_fiber_entry_t) fiberEntry, 0, 0, 7, 0);
nano_sem_init (&nanoSemTask);
nano_timer_init (&timer, data);
while (1)
{
/* say "hello" */
PRINT ("%s: Hello World!\n", __FUNCTION__);
/* wait a while, then let fiber have a turn */
nano_task_timer_start (&timer, SLEEPTICKS);
nano_task_timer_wait (&timer);
nano_task_sem_give (&nanoSemFiber);
/* now wait for fiber to let us have a turn */
nano_task_sem_take_wait (&nanoSemTask);
}
}
Step-by-Step Description
========================
A quick breakdown of the major objects in use
by this sample includes:
- One fiber, executing in the :c:func:`fiberEntry()` routine.
- The background task, executing in the :c:func:`main()` routine.
- Two semaphores (*nanoSemTask*, *nanoSemFiber*).
- Two timers:

  + One local to the fiber (*timer*)
  + One local to the background task (*timer*)
First, the background task starts executing :c:func:`main()`. The
background task calls :c:func:`task_fiber_start()`, initializing and
starting the fiber. Since a fiber is available to run, the background
task is pre-empted and the fiber begins running.
Execution jumps to :c:func:`fiberEntry()`. The fiber initializes
*nanoSemFiber* and the fiber-local timer before dropping into the while
loop, where it takes and waits on *nanoSemFiber*.
The background task initializes nanoSemTask and the task-local timer.
The following steps repeat endlessly:
#. The background task execution begins at the top of the main while
loop and prints, “main: Hello World!”
#. The background task then starts a timer for SLEEPTICKS in the
future, and waits for that timer to expire.
#. Once the timer expires, it signals the fiber by giving the
nanoSemFiber semaphore, which in turn marks the fiber as runnable.
#. The fiber, now marked as runnable, pre-empts the background
task, allowing execution to jump to the fiber and resume after the
call to :c:func:`nano_fiber_sem_take_wait()`.
#. The fiber then prints, “fiberEntry: Hello World!” It starts a timer
for SLEEPTICKS in the future and waits for that timer to expire. The
fiber is marked as not runnable, and execution jumps to the
background task.
#. The background task then takes and waits on the nanoSemTask
semaphore.
#. Once the timer expires, the fiber signals the background task by
giving the nanoSemTask semaphore. The background task is marked as
runnable, but code execution continues in the fiber, since fibers
take priority over the background task. The fiber execution
continues to the top of the while loop, where it takes and waits on
nanoSemFiber. The fiber is marked as not runnable, and the
background task is scheduled.
#. The background task execution picks up after the call to
:c:func:`nano_task_sem_take_wait()`. It jumps to the top of the
while loop.