Transition the `user_data` field in `struct net_buf` to be a flexible
array member instead of a hardcoded array. Compile-time asserts are
introduced where the intermediate struct is used, to ensure that the
assumptions relied upon by the runtime code hold true.
The primary assumptions are that the two `user_data` fields exist at the
same memory offset, and that the instantiated struct size can be
determined from the generic struct size and the length of the user data.
`net_buf_id` and `pool_get_uninit` must now use manual address
calculations, as the `__bufs` type no longer matches the actual size of
the instantiated variable.
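A minimal sketch of the pattern, with illustrative struct and field
names (not the exact tree code), showing the two asserted assumptions:

```c
#include <stddef.h>
#include <stdint.h>

/* Illustrative only: the generic struct now ends in a flexible array
 * member instead of a fixed-size array. */
struct buf_generic {
	uint8_t ref;
	uint8_t flags;
	uint16_t len;
	uint8_t user_data[];	/* flexible array member */
};

/* Each pool instantiates an intermediate struct: the generic part as
 * raw bytes, followed by that pool's user data area. */
#define POOL_UD_SIZE 8
struct buf_instance {
	uint8_t b[sizeof(struct buf_generic)];
	uint8_t user_data[POOL_UD_SIZE];
};

/* Assumption 1: both user_data fields exist at the same offset. */
_Static_assert(offsetof(struct buf_instance, user_data) ==
	       offsetof(struct buf_generic, user_data),
	       "user_data offsets must match");

/* Assumption 2: the instantiated size is the generic struct size plus
 * the user data length, which is what the manual address calculations
 * in net_buf_id() and pool_get_uninit() rely on. */
_Static_assert(sizeof(struct buf_instance) ==
	       sizeof(struct buf_generic) + POOL_UD_SIZE,
	       "instance size must be derivable");
```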
Signed-off-by: Jordan Yates <jordan.yates@data61.csiro.au>
Store the `user_data` array size on both the pool and net_buf structs.
This will enable length validation once `user_data` fields are not
globally the same size. The new variables fit inside existing padding,
and therefore do not increase the size of either structure.
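For illustration (field names and widths assumed, not the exact tree
layout), a byte-sized count can land in a hole the compiler already had
to leave:

```c
#include <stdint.h>

/* Hypothetical layout: with a 4-byte-aligned member following two
 * uint8_t fields, bytes 2 and 3 used to be padding; a uint8_t size
 * field placed there leaves sizeof(struct pool_like) unchanged. */
struct pool_like {
	uint8_t ref;
	uint8_t flags;
	uint8_t user_data_size;	/* new field, occupies former padding */
	/* one byte of padding remains here */
	uint32_t aligned_member;
};
```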
Signed-off-by: Jordan Yates <jordan.yates@data61.csiro.au>
net_buf_max_len() provides the maximum number of bytes which can be put
behind the data pointer. This provides a safe alternative to using the
size field of the net_buf structure directly, which does not take the
reserved bytes (headroom) into account.
This commit also replaces the usage of the size field in places where
it was used directly. Code has not been adjusted where it is easy to
recognise that the buffer does not have any reserved bytes, which is
the case right after allocation or reset. The same goes for the faulty
usage by net_pkt, as exposed by the last commit and being fixed by a
separate commit.
Even though it would be cleaner, I decided to not rename the size field
to e.g. __buf_size in order to keep the amount of code changes low.
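A hedged sketch of the relationship, using the public headroom accessor
rather than the internal fields:

```c
#include <net/buf.h>	/* <zephyr/net/buf.h> in newer trees */

/* What net_buf_max_len() reports: the bytes usable behind buf->data,
 * i.e. the total size minus the reserved headroom. */
static size_t usable_behind_data(struct net_buf *buf)
{
	return buf->size - net_buf_headroom(buf);
}
```

Comparing a requested length against net_buf_max_len() rather than
buf->size keeps the reserved bytes out of the budget.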
Signed-off-by: Reto Schneider <reto.schneider@husqvarnagroup.com>
This enables using net_buf_append_bytes without passing an allocator,
in which case the code will attempt to use the net_buf_pool of the
original buffer.
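A minimal usage sketch (signature as of this change; the timeout type
has varied across Zephyr versions):

```c
#include <net/buf.h>	/* <zephyr/net/buf.h> in newer trees */

static size_t append_from_own_pool(struct net_buf *buf,
				   const void *payload, size_t len)
{
	/* NULL allocator callback: continuation buffers come from the
	 * net_buf_pool of the original buffer. */
	return net_buf_append_bytes(buf, len, payload, K_NO_WAIT,
				    NULL, NULL);
}
```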
Signed-off-by: Luiz Augusto von Dentz <luiz.von.dentz@intel.com>
Adds a new family of `struct net_buf` operations that remove data from
the end of the buffer.
The semantics of `net_buf_remove_mem` have been chosen to match those of
`net_buf_pull_mem`, i.e. the return value is a pointer to the memory
that was removed.
The opposite of this function, `net_buf_remove`, would need to return
the old end of the data buffer to be useful. However, this value is
always an invalid target for reading or writing data (it points into
the middle of unused data). The existence of the function would be
misleading, therefore it is not implemented.
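A short sketch of the chosen semantics (the consumer function is
hypothetical):

```c
#include <net/buf.h>	/* <zephyr/net/buf.h> in newer trees */

extern void process_trailer(uint8_t *trailer);	/* hypothetical */

static void strip_trailer(struct net_buf *buf)
{
	/* Like net_buf_pull_mem(), the return value points at the
	 * removed bytes: here, a 4-byte trailer at the old end. */
	uint8_t *trailer = net_buf_remove_mem(buf, 4);

	process_trailer(trailer);
}
```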
Fixes #31069.
Signed-off-by: Jordan Yates <jordan.yates@data61.csiro.au>
Use the core k_heap API pervasively within our tree instead of the
z_mem_pool wrapper that provided compatibility with the older mempool
implementation.
Almost all of this is straightforward swapping of one alloc/free call
for another. In a few cases where code was holding onto an old-style
"mem_block" a local compatibility struct with a single field has been
swapped in to keep the invasiveness of the changes down.
Note that not all the relevant changes in this patch have in-tree test
coverage, though I validated that it all builds.
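The typical swap looks like this (heap name and sizes are
illustrative):

```c
#include <kernel.h>	/* <zephyr/kernel.h> in newer trees */

K_HEAP_DEFINE(buf_heap, 1024);

void heap_example(void)
{
	/* Was: z_mem_pool_alloc(&pool, &block, 64, K_NO_WAIT) */
	void *mem = k_heap_alloc(&buf_heap, 64, K_NO_WAIT);

	if (mem != NULL) {
		/* Was: z_mem_pool_free(&block) */
		k_heap_free(&buf_heap, mem);
	}
}
```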
Signed-off-by: Andy Ross <andrew.j.ross@intel.com>
Mark all k_mem_pool APIs deprecated for future code. Remaining
internal usage now uses equivalent "z_mem_pool" symbols instead.
Fixes #24358
Signed-off-by: Andy Ross <andrew.j.ross@intel.com>
When CONFIG_NET_BUF_POOL_USAGE is used to monitor avail_count, this
variable should be protected. Protect it by using an atomic variable.
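A sketch of the idea with Zephyr's atomic API:

```c
#include <sys/atomic.h>	/* <zephyr/sys/atomic.h> in newer trees */

static atomic_t avail_count;

/* Allocation and free paths can now race without corrupting the
 * bookkeeping counter. */
static void buf_taken(void)    { atomic_dec(&avail_count); }
static void buf_returned(void) { atomic_inc(&avail_count); }
```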
Signed-off-by: Ehud Naim <ehudn@marvell.com>
This patch updates the net_buf API to use k_timeout_t in essentially
all places where "s32_t timeout" was previously used. For the most
part the conversion is trivial, except for the places where
intermediate decrements of the remaining timeout are needed. For this
the z_timeout_end_calc() API is used.
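A sketch of the intermediate-decrement pattern (internal helper names
from the era of this change; newer trees spell them sys_clock_*; the
wait helpers are hypothetical):

```c
#include <kernel.h>

extern bool still_waiting(void);			/* hypothetical */
extern void block_on_next_fragment(k_timeout_t t);	/* hypothetical */

static void wait_with_budget(k_timeout_t timeout)
{
	/* Compute the absolute end of the allowed window once... */
	uint64_t end = z_timeout_end_calc(timeout);

	while (still_waiting()) {
		block_on_next_fragment(timeout);

		/* ...then shrink the remaining budget each iteration. */
		if (!K_TIMEOUT_EQ(timeout, K_NO_WAIT) &&
		    !K_TIMEOUT_EQ(timeout, K_FOREVER)) {
			timeout = Z_TIMEOUT_TICKS(end - z_tick_get());
		}
	}
}
```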
Signed-off-by: Johan Hedberg <johan.hedberg@intel.com>
Remove net_buf's own assertion mechanism and use the system assert
instead. This changes the assertion message printed: instead of the
failed expression, the file name and line number are printed. This
provides more context, as the same expression could be asserted upon
in multiple places and would then not provide enough clarity in the
message.
This removes the option to enable only net_buf assertions.
Signed-off-by: Joakim Andersson <joakim.andersson@nordicsemi.no>
NET_BUF_ASSERT() translates into nothing even in the presence
of CONFIG_ASSERT=y, which is misleading.
Use __ASSERT() in NET_BUF_ASSERT().
Signed-off-by: Oleg Zhurakivskyy <oleg.zhurakivskyy@intel.com>
This adds net_buf_simple_init_with_data which can be used to initialize
a net_buf_simple pointer with an external data pointer.
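Usage sketch:

```c
#include <net/buf.h>	/* <zephyr/net/buf.h> in newer trees */

static uint8_t storage[64];	/* externally owned memory */
static struct net_buf_simple buf;

static void wrap_external(void)
{
	/* The buffer references the external data directly; nothing is
	 * copied and no pool allocation takes place. */
	net_buf_simple_init_with_data(&buf, storage, sizeof(storage));
}
```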
Signed-off-by: Luiz Augusto von Dentz <luiz.von.dentz@intel.com>
Use sys_put_xyz() helpers instead of memcpy() whenever possible. This
brings in straight-line inline code for pushes and adds of known,
small sizes.
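For example, a hedged sketch of the kind of call site affected:

```c
#include <sys/byteorder.h>	/* <zephyr/sys/byteorder.h> in newer trees */
#include <net/buf.h>

static void add_le16_field(struct net_buf *buf, uint16_t val)
{
	/* Compiles to two direct byte stores rather than a memcpy() of
	 * a pre-swapped temporary. */
	sys_put_le16(val, net_buf_add(buf, sizeof(val)));
}
```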
Signed-off-by: Martí Bolívar <marti.bolivar@nordicsemi.no>
Provides a way to clone a net_buf_simple without altering the state of
the original buffer. The primary usage scenario is for manipulating a
previously allocated PDU inside a buffer without altering the length and
offset of the buffer.
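Usage sketch:

```c
#include <net/buf.h>	/* <zephyr/net/buf.h> in newer trees */

static void peek_opcode(struct net_buf_simple *pdu)
{
	struct net_buf_simple clone;

	/* The clone shares the original's storage but carries its own
	 * data/len bookkeeping, so pulling from it leaves the original
	 * buffer's length and offset untouched. */
	net_buf_simple_clone(pdu, &clone);

	uint8_t opcode = net_buf_simple_pull_u8(&clone);

	(void)opcode;
}
```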
Signed-off-by: Trond Einar Snekvik <Trond.Einar.Snekvik@nordicsemi.no>
Move misc/byteorder.h to sys/byteorder.h and
create a shim for backward-compatibility.
No functional changes to the headers.
A warning in the shim can be controlled with CONFIG_COMPAT_INCLUDES.
Related to #16539
Signed-off-by: Anas Nashif <anas.nashif@intel.com>
There are issues using lowercase min and max macros when compiling a C++
application with a third-party toolchain such as GNU ARM Embedded when
using some STL headers, e.g. <chrono>.
This is because there are actual C++ functions called min and max
defined in some of the STL headers, and these macros interfere with
them. By changing the macros to UPPERCASE, which is consistent with
almost all other pre-processor macros, this naming conflict is avoided.
All files that use these macros have been updated.
Signed-off-by: Carlos Stuart <carlosstuart1970@gmail.com>
This is the same as net_buf_pull(), except that instead of returning
the new buf->data it returns the old buf->data. This was recently
discussed in GitHub issue #12562.
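For example (the consumer function is hypothetical):

```c
#include <net/buf.h>	/* <zephyr/net/buf.h> in newer trees */

extern void handle_header(uint8_t *hdr);	/* hypothetical */

static void parse_header(struct net_buf *buf)
{
	/* net_buf_pull() would return the new buf->data; pull_mem()
	 * returns the old one, i.e. a pointer to the two bytes that
	 * were just pulled. */
	uint8_t *hdr = net_buf_pull_mem(buf, 2);

	handle_header(hdr);
}
```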
Signed-off-by: Johan Hedberg <johan.hedberg@intel.com>
net_buf_linearize() used to clear the contents of the output buffer,
just to fill it with data as the next step. The only effect that would
have is when less data is written to the output buffer than its full
size. But it's not sound for a caller to rely on net_buf_linearize()
for that; instead, callers should take care to handle any such
conditions themselves. For example, a caller which wants to process
the data as a zero-terminated string must reserve a byte for the
terminator in the output buffer explicitly (and set it to zero).
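For instance, a sketch of what such a caller would do (the consumer
function is hypothetical):

```c
#include <net/buf.h>	/* <zephyr/net/buf.h> in newer trees */

extern void handle_string(const char *s);	/* hypothetical */

static void handle_line(struct net_buf *frags)
{
	char str[64];
	/* Leave one byte of slack for the terminator... */
	size_t n = net_buf_linearize(str, sizeof(str) - 1, frags, 0,
				     sizeof(str) - 1);

	/* ...and set it explicitly; net_buf_linearize() no longer
	 * pre-clears the output buffer. */
	str[n] = '\0';
	handle_string(str);
}
```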
The only in-tree user which relied on clearing the output buffer was
wncm14a2a.c. But it either had buffer sizes calculated very precisely
to always accommodate an extra trailing zero byte (without providing
code comments about this), or arguably could suffer from buffer
overruns (at least if data received from a modem was invalid and
filled up the entire destination buffer, leaving no space for the
trailing zero).
Signed-off-by: Paul Sokolovsky <paul.sokolovsky@linaro.org>
Don't try to find "errors" in the values of the dst_len and len params
passed to net_buf_linearize(). Instead, do what common sense dictates
given the values passed in, specifically:
1. Never read more than dst_len (or it would lead to buffer
overflow).
2. It's absolutely ok to read less than specified by the "len" param;
that's why this function returns the number of bytes read in the first
place.
The motivation for this change is that the function is not useful with
its current behavior. For example, a number of Ethernet drivers
linearize a packet to send, but each does it with its own duplicated
ad-hoc routine, because net_buf_linearize() would just return an error
for the natural use of:
net_buf_linearize(buf, sizeof(buf), pkt->frags, 0, sizeof(buf));
Signed-off-by: Paul Sokolovsky <paul.sokolovsky@linaro.org>
Under GNU C, sizeof(void) = 1. This commit merely makes it an explicit
u8. Pointer arithmetic over void types is:
* A GNU C extension
* Not supported by Clang
* Illegal across all ISO C standards
See also: https://gcc.gnu.org/onlinedocs/gcc/Pointer-Arith.html
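In other words (sketch):

```c
#include <stdint.h>

/* GNU C treats sizeof(void) as 1, so this compiles as an extension,
 * but it is ill-formed ISO C: */
static void *advance_gnu(void *p) { return p + 1; }

/* Making the byte-wise intent explicit is portable: */
static uint8_t *advance_iso(uint8_t *p) { return p + 1; }
```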
Signed-off-by: Mark Ruvald Pedersen <mped@oticon.com>
The return of memset is never checked. This patch explicitly ignores
the return to avoid MISRA-C violations.
The only excluded directory was ext/*, since it contains only imported
code.
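The resulting pattern (sketch):

```c
#include <string.h>

static void clear(void *buf, size_t buf_len)
{
	/* memset() returns its first argument; the cast documents that
	 * the value is intentionally discarded, satisfying MISRA-C. */
	(void)memset(buf, 0, buf_len);
}
```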
Signed-off-by: Flavio Ceolin <flavio.ceolin@intel.com>
Even though the net_buf implementation may (and does currently)
internally use u16_t for lengths, keep the public facing API
consistent by using size_t.
Signed-off-by: Johan Hedberg <johan.hedberg@intel.com>
This makes the net_buf_append_bytes() API consistent with all other
net_buf APIs that take a pointer to arbitrary data.
Fixes #9283
Signed-off-by: Johan Hedberg <johan.hedberg@intel.com>
This change moves the logic for linearize and append_bytes from
the net_pkt sources into the net_buf sources, where it can be
made available to layers which do not depend on net_pkt. It also
adds a new net_buf_skip() function which can be used to iterate
through a list of net_buf (freeing the buffers as it goes).
For the append_bytes function to be generic in nature, a net_buf
allocator callback was created. Callers of append_bytes pass in
the callback which determines where the resulting net_buf is
allocated from.
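A sketch of the callback shape (the timeout is typed as it was at the
time of this change; it was later converted to k_timeout_t, and the
pool name is illustrative):

```c
#include <net/buf.h>	/* <zephyr/net/buf.h> in newer trees */

NET_BUF_POOL_DEFINE(frag_pool, 4, 128, 0, NULL);

/* The caller, not net_buf, decides which pool continuation buffers
 * come from. */
static struct net_buf *frag_alloc_cb(s32_t timeout, void *user_data)
{
	return net_buf_alloc(&frag_pool, timeout);
}

static size_t append(struct net_buf *buf, const void *data, size_t len)
{
	return net_buf_append_bytes(buf, len, data, K_FOREVER,
				    frag_alloc_cb, NULL);
}
```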
Also, the dst buffer in linearize is now cleared prior to copy
(this was an addition from the code moved from net_pkt).
In order to preserve existing callers, the original functions are
left in the net_pkt layer, but now merely act as wrappers.
Signed-off-by: Michael Scott <mike@foundries.io>
The difference being that the data is then not allocated from the
pool; only the net_buf is.
Signed-off-by: Tomasz Bursztyka <tomasz.bursztyka@linux.intel.com>
Introduce two new "standard" data allocators to net_buf. There are now
three in total:
NET_BUF_POOL_FIXED_DEFINE: This is the closest to the old
implementation, i.e. fixed size chunks. It's also what the old
NET_BUF_POOL_DEFINE macro maps to.
NET_BUF_POOL_HEAP_DEFINE: uses the OS heap
NET_BUF_POOL_VAR_DEFINE: defines a variable sized allocator using
k_mem_pool (this is all that there was in my first draft of this
feature)
Currently the variable length allocators (HEAP & VAR) support
reference counted data payloads, i.e. cheap cloning. The FIXED
allocator does not currently support this, to allow for the simplest
possible implementation, but the support can be added later if
desired.
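For example (argument lists shown as sketches; they have shifted
slightly across Zephyr versions):

```c
/* Fixed-size chunks; what NET_BUF_POOL_DEFINE maps to: */
NET_BUF_POOL_FIXED_DEFINE(fixed_pool, 10, 128, NULL);

/* Data payloads carved out of the OS heap: */
NET_BUF_POOL_HEAP_DEFINE(heap_pool, 10, NULL);

/* Variable-sized allocator backed by a k_mem_pool: */
NET_BUF_POOL_VAR_DEFINE(var_pool, 10, 1024, NULL);
```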
Signed-off-by: Johan Hedberg <johan.hedberg@intel.com>
Redesign of the net_buf_simple and net_buf structs, where the data
payload portion is split to a separately allocated chunk of memory. In
practice this means that buf->__buf becomes a pointer, having
previously been just a marker (an empty array) for where the payload
begins right after the meta-data.
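Roughly (a sketch of the resulting layout, not verbatim):

```c
#include <stdint.h>

struct net_buf_simple_sketch {
	uint8_t *data;	/* current read/write position */
	uint16_t len;
	uint16_t size;
	uint8_t *__buf;	/* was a zero-length marker array; now a pointer
			 * to a separately allocated payload chunk */
};
```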
Fixes #3283
Signed-off-by: Johan Hedberg <johan.hedberg@intel.com>
Now that net_buf has "native" support for sys_slist_t in the form of
the sys_snode_t member, there's a danger people will forget to clear
out buf->frags when getting buffers from a list directly with
sys_slist_get(). This is analogous to the reason why we have
net_buf_get/put APIs instead of using k_fifo_get/put.
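Hence dedicated wrappers (names assumed to mirror the fifo variants)
that do the clearing for the caller:

```c
#include <net/buf.h>	/* <zephyr/net/buf.h> in newer trees */

static struct net_buf *take(sys_slist_t *list)
{
	/* The wrapper clears buf->frags as part of the get; a raw
	 * sys_slist_get() would leave a stale fragment chain behind. */
	return net_buf_slist_get(list);
}
```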
Signed-off-by: Johan Hedberg <johan.hedberg@intel.com>
Add a net_buf_id() API which translates a buffer into a zero-based
index, based on its placement in the buffer pool. This can be useful
if you want to associate an external array of meta-data contexts with
the buffers of a pool.
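For example (pool and context type are illustrative):

```c
#include <net/buf.h>	/* <zephyr/net/buf.h> in newer trees */

#define BUF_COUNT 10

NET_BUF_POOL_DEFINE(my_pool, BUF_COUNT, 128, 0, NULL);

/* Meta-data lives outside the pool, one slot per buffer: */
static struct my_meta {
	int state;
} meta[BUF_COUNT];

static struct my_meta *buf_meta(struct net_buf *buf)
{
	return &meta[net_buf_id(buf)];
}
```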
The added value of this API is slightly limited at the moment, since
the net_buf API allows custom user-data sizes for each pool (i.e. the
user data can be used instead of a separately allocated meta-data
array). However, there's some refactoring coming soon which will unify
all net_buf structs to have the same fixed (and typically small)
amount of user data. In such cases it may be desirable to have
external user data in order not to inflate all buffers in the system
because of a single pool needing the extra memory.
Signed-off-by: Johan Hedberg <johan.hedberg@intel.com>
Moving the net_buf_pool objects to a dedicated area lets us access
them by array offset into this area instead of directly by pointer.
This helps reduce the size of net_buf objects by 4 bytes.
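A sketch of the lookup this enables (the actual symbol names may
differ):

```c
#include <net/buf.h>	/* <zephyr/net/buf.h> in newer trees */

extern struct net_buf_pool pool_area[];	/* dedicated linker area */

/* A small integer index stored in each net_buf replaces a 4-byte pool
 * pointer; the pool is recovered by offsetting into the area. */
static struct net_buf_pool *pool_from_id(int id)
{
	return &pool_area[id];
}
```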
Signed-off-by: Johan Hedberg <johan.hedberg@intel.com>
Convert code to use u{8,16,32,64}_t and s{8,16,32,64}_t instead of C99
integer types.
Jira: ZEP-2051
Change-Id: I4ec03eb2183d59ef86ea2c20d956e5d272656837
Signed-off-by: Kumar Gala <kumar.gala@linaro.org>