There is no need to start the workqueue every time the mbox is
initialized: since it is shared by all the instances, it must be started
only once.
Signed-off-by: Carlo Caione <ccaione@baylibre.com>
The priority of the workqueue responsible for rpmsg processing was too
low: any cooperative task could cause rpmsg processing to be slightly
delayed.
Signed-off-by: Andrzej Kuroś <andrzej.kuros@nordicsemi.no>
Introduce a new RPMsg with static VRINGs backend. This new backend makes
it easy to generate and use IPC instances backed by OpenAMP using the DT.
Each instance is defined in the DT as (for example):
ipc: ipc {
        compatible = "zephyr,ipc-openamp-static-vrings";
        shm = <&sram_ipc0>;
        mboxes = <&mbox 0>, <&mbox 1>;
        mbox-names = "tx", "rx";
        role = "primary";
        status = "okay";
};
It is then possible to register and send data through endpoints using the
IPC service APIs.
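For illustration, a minimal user of such an instance could look like the
sketch below; the endpoint name, the callbacks and the header paths are
illustrative and may differ between Zephyr versions:

  #include <stdint.h>
  #include <zephyr/device.h>
  #include <zephyr/ipc/ipc_service.h>

  static void ep_bound(void *priv)
  {
      /* The endpoint is bound on both sides: sending is now safe. */
  }

  static void ep_recv(const void *data, size_t len, void *priv)
  {
      /* Handle data coming from the remote side. */
  }

  static struct ipc_ept ep;

  static const struct ipc_ept_cfg ep_cfg = {
      .name = "ep0",
      .cb = {
          .bound = ep_bound,
          .received = ep_recv,
      },
  };

  int main(void)
  {
      const struct device *ipc_dev = DEVICE_DT_GET(DT_NODELABEL(ipc));
      uint8_t msg[] = { 0xaa };

      ipc_service_open_instance(ipc_dev);
      ipc_service_register_endpoint(ipc_dev, &ep, &ep_cfg);

      /* ... wait for ep_bound() to fire, then ... */
      ipc_service_send(&ep, msg, sizeof(msg));

      return 0;
  }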
Signed-off-by: Carlo Caione <ccaione@baylibre.com>
Extend the RPMsg structs to accommodate the introduction of new
backends and, at the same time, fix the ipc_rpmsg_static_vrings_mi
backend (the only user).
Also rework some comments and the ipc_service glue code.
Signed-off-by: Carlo Caione <ccaione@baylibre.com>
As known, an instance is the representation of a physical communication
channel between two domains / CPUs.
This communication channel must usually be known and initialized (that
is, "opened") by both parties in the communication before proper
communication can be established using endpoints.
Depending on the backend and on the library / protocol used by the
backend, this "opening" can go through some handshaking or
synchronization procedure run by the parties, which can sometimes be
blocking or time-consuming.
For example, in the simplest case of a backend using OpenAMP, the remote
side of the communication waits for the local side to be up and running
by loop-waiting on a flag that the local party sets in the shared
memory.
This is a blocking process, so particular attention must be paid to
where this is placed in the backend code.
Currently it is only possible to have this synchronization procedure at
two points: (1) in the init function of the instance, or (2) during
ipc_service_register_endpoint().
Putting any blocking routine in the init code should be highly
discouraged, so (1) must be excluded. Using the endpoint registration
function (2) is also frowned upon, because the synchronization concerns
the instance as a whole, not the single endpoints.
This patch adds a new optional ipc_service_open_instance() function
that can be used to host the handshaking or synchronization code between
the two parties of the instance.
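As a sketch of where such code could live, a backend could implement the
new hook along these lines (struct backend_cfg, VDEV_MAGIC_READY and the
polling period are made-up illustrations, not the actual backend code):

  #include <zephyr/kernel.h>
  #include <zephyr/device.h>

  #define VDEV_MAGIC_READY 0xcafef00d /* illustrative handshake value */

  struct backend_cfg {                /* hypothetical instance config */
      uintptr_t shm_addr;
      bool primary;
  };

  static int backend_open(const struct device *instance)
  {
      const struct backend_cfg *cfg = instance->config;
      volatile uint32_t *flag = (volatile uint32_t *)cfg->shm_addr;

      if (cfg->primary) {
          /* Local party: publish readiness in shared memory. */
          *flag = VDEV_MAGIC_READY;
      } else {
          /* Remote party: blocking loop-wait for the local side. */
          while (*flag != VDEV_MAGIC_READY) {
              k_usleep(100);
          }
      }

      return 0;
  }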
Signed-off-by: Carlo Caione <ccaione@baylibre.com>
The IPC libraries will be used by several backends. Move the libraries
out into a new 'lib' directory.
Signed-off-by: Carlo Caione <ccaione@baylibre.com>
As part of the work to support multiple IPC instances / backends using
the IPC service, the static vrings mi code must be reworked to resemble
a classic device driver.
Also fix the sample using it.
Signed-off-by: Carlo Caione <ccaione@baylibre.com>
The IPC service code currently assumes that only one IPC instance
exists, with the user employing the IPC service API to interface with
that singleton instance.
This is a huge limitation, and this patch removes that assumption by
introducing three major changes to the IPC service API:
- All the IPC instances are now supposed to be instantiated as a struct
  device. A new test is introduced to be used as a skeleton for all the
  other backends.
- ipc_service_register_backend() is now removed (because multiple
  backends are now supported at the same time).
- All the other ipc_service_*() functions now take a struct device
  pointer as a parameter to specify which instance the user is going
  to act on.
In this patch the documentation is also extended to better clarify the
terminology used.
Signed-off-by: Carlo Caione <ccaione@baylibre.com>
Only one IPC service backend is currently present: the multi_instance
backend. This backend relies heavily on the RPMsg multi_instance code
to instantiate and manage instances and endpoints. Samples exist for
both in the samples/subsys/ipc/ directory.
With this patch we are "unpacking" the RPMsg multi_instance code to make
it more modular and reusable by different backends.
In particular, we re-organize the code into two helper libraries: an
RPMsg library and a VRING / virtqueues static allocation library. At the
same time we rewrite the multi_instance backend to make full use of
those new libraries, and remove the old multi_instance sample.
Signed-off-by: Carlo Caione <ccaione@baylibre.com>
The rpmsg_mi_configure_shm() function does not return anything and is
not marked as static. Fix this by changing its return type to 'static
void'.
Signed-off-by: Carlo Caione <ccaione@baylibre.com>
When possible, use 'size_t' for sizes and 'uintptr_t' for generic
addresses instead of relying on uint*_t types.
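For instance (illustrative fields only):

  struct shm_region {
      uintptr_t addr; /* generic address: uintptr_t, not uint32_t */
      size_t size;    /* size in bytes: size_t, not uint32_t */
  };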
Signed-off-by: Carlo Caione <ccaione@baylibre.com>
This patch is the first step to make rpmsg_multi_instance usable in a
multi-core scenario.
The current driver uses a driver-local variable (instance) to track the
number of allocated instances. This counter is effectively used to
assign to each instance the correct portion of the shared memory.
This is fundamentally wrong, because it assumes that only one single
shared memory region exists, to be split among all the allocated
instances. When the platform has more than one core this is obviously
not the case, since each pair of cores communicates using a different
memory region.
To solve this issue we introduce a new struct rpmsg_mi_ctx_shm_cfg that
does two things: (1) it carries the information about the shared memory
region and (2) it carries an internal variable used to track the
instances allocated in that region. The same struct must be used every
time a new instance is allocated in the same shared memory region.
We also fix a race between threads in the current code when accessing
the instance variable, by adding a serializing mutex.
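A sketch of the idea (the field names are illustrative, not the exact
layout):

  struct rpmsg_mi_ctx_shm_cfg {
      /* (1) description of one shared memory region */
      uintptr_t addr;        /* base address of the region */
      size_t size;           /* total size of the region */

      /* (2) internal bookkeeping, serialized by the mutex */
      unsigned int instance; /* instances already carved from the region */
      struct k_mutex mtx;    /* guards 'instance' across threads */
  };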
Signed-off-by: Carlo Caione <ccaione@baylibre.com>
For the instance configuration, the rpmsg_multi_instance code currently
uses configuration info coming from two different sources: the
rpmsg_mi_ctx_cfg struct and Kconfig.
This is not only confusing but also prevents configuring the instances
with information not coming from Kconfig (for example when we want to
configure the instance using DT).
Signed-off-by: Carlo Caione <ccaione@baylibre.com>
The IPC drivers rpmsg_service and rpmsg_multi_instance do not
explicitly enable the RX IPM channel when two different devices are
used for TX and RX. While this could be redundant for some IPM drivers,
in some cases the hardware needs to be enabled before it can be used.
Add the missing calls to ipm_set_enabled() for both devices.
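In practice the fix boils down to something like the following
(hypothetical helper, shortened error handling):

  #include <zephyr/drivers/ipm.h>

  static int ipm_enable_both(const struct device *tx,
                             const struct device *rx)
  {
      int err;

      /* Some IPM drivers need an explicit enable before the channel
       * can be used; for the others the call is harmlessly redundant.
       */
      err = ipm_set_enabled(tx, 1);
      if (err != 0) {
          return err;
      }

      return ipm_set_enabled(rx, 1);
  }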
Signed-off-by: Carlo Caione <ccaione@baylibre.com>
In the current code the naming of the
CONFIG_RPMSG_MULTI_INSTANCE_?_IPM_{TX,RX}_NAME symbols is 1-based. While
this is not currently an issue, it could easily become one if the
symbol is programmatically used as part of a preprocessor enumeration
(for example when using DT_INST_FOREACH_STATUS_OKAY(...) & co).
To avoid trouble, just make the index start from 0 instead of 1.
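For illustration, with 0-based naming a hypothetical helper can
token-paste the instance number straight into the symbol:

  /* Hypothetical: for inst == 0 this expands to
   * CONFIG_RPMSG_MULTI_INSTANCE_0_IPM_TX_NAME, and so on, with no
   * off-by-one glue. If 'inst' is itself a macro (as with
   * DT_INST_FOREACH_STATUS_OKAY()), an extra expansion step such as
   * UTIL_CAT() is needed.
   */
  #define RPMSG_MI_IPM_TX_NAME(inst) \
      CONFIG_RPMSG_MULTI_INSTANCE_##inst##_IPM_TX_NAME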
Signed-off-by: Carlo Caione <ccaione@baylibre.com>
Implementation of the backend for IPC Service.
Multi-instance RPMsg is used as the backend.
Signed-off-by: Marcin Jeliński <marcin.jelinski@nordicsemi.no>
IPC Service allows plugging in different transport backends. This patch
specifies a generic API that is implemented by each backend.
Signed-off-by: Marcin Jeliński <marcin.jelinski@nordicsemi.no>
This patch implements a service that adds multi-instance
capabilities to RPMsg.
Each instance is allocated a separate piece of shared memory.
Multiple instances provide independent message processing.
Each instance has its own work_q.
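A sketch of the per-instance context (illustrative names; the real
struct carries more state):

  struct rpmsg_mi_ctx {
      struct k_work_q ipm_work_q; /* per-instance processing thread */
      struct k_work ipm_work;     /* queued from the IPM callback */
      /* ... shared memory, virtqueues, endpoints ... */
  };

  /* Each instance starts its own work queue on its own stack. */
  static void ctx_wq_start(struct rpmsg_mi_ctx *ctx,
                           k_thread_stack_t *stack, size_t stack_size,
                           int prio)
  {
      k_work_queue_start(&ctx->ipm_work_q, stack, stack_size, prio,
                         NULL);
  }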
Signed-off-by: Marcin Jeliński <marcin.jelinski@nordicsemi.no>
The RPMsg service and backend had a hardcoded and incorrectly named log
level. Create Kconfig options for configuring the log level of this
module.
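The module can then register its logging with the new option (the
symbol name shown is indicative, not necessarily the exact one):

  #include <zephyr/logging/log.h>

  /* CONFIG_IPC_SERVICE_LOG_LEVEL is assumed to be one of the new
   * Kconfig options.
   */
  LOG_MODULE_REGISTER(ipc_service, CONFIG_IPC_SERVICE_LOG_LEVEL);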
Signed-off-by: Marek Porwisz <marek.porwisz@nordicsemi.no>
Remove support for the Musca-A board. This board is rarely used, few
are available, and it is superseded by Musca-B and Musca-S.
Signed-off-by: Kumar Gala <kumar.gala@linaro.org>
This patch implements a service that adds multi-endpoint
capabilities to RPMsg. Multiple endpoints are intended to be used
when multiple modules need services from a remote processor. Each
module may register one or more RPMsg endpoints.
The implementation separates the backend from the service, which
allows extending this module to support other topologies like
Linux <-> Zephyr.
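A sketch of the intended per-module usage (the endpoint name, helper
names and header paths are illustrative):

  #include <zephyr/kernel.h>
  #include <zephyr/ipc/rpmsg_service.h>
  #include <openamp/open_amp.h>

  static int sensor_ept_cb(struct rpmsg_endpoint *ept, void *data,
                           size_t len, uint32_t src, void *priv)
  {
      /* Handle a message addressed to this module's endpoint. */
      return RPMSG_SUCCESS;
  }

  void sensor_module_init(void)
  {
      /* Each module registers its own endpoint(s) by name. */
      int ep_id = rpmsg_service_register_endpoint("sensor",
                                                  sensor_ept_cb);

      if (ep_id >= 0) {
          uint8_t ping = 0;

          /* Wait until the remote side registers the same endpoint. */
          while (!rpmsg_service_endpoint_is_bound(ep_id)) {
              k_sleep(K_MSEC(1));
          }

          rpmsg_service_send(ep_id, &ping, sizeof(ping));
      }
  }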
Co-authored-by: Piotr Szkotak <piotr.szkotak@nordicsemi.no>
Signed-off-by: Hubert Miś <hubert.mis@nordicsemi.no>