Vendor dependencies for 0.3.0 release

2025-09-27 10:29:08 -05:00
parent 0c8d39d483
commit 82ab7f317b
26803 changed files with 16134934 additions and 0 deletions

1
vendor/offset-allocator/.cargo-checksum.json vendored Normal file

@@ -0,0 +1 @@
{"files":{"Cargo.toml":"2eb824a3ce811c56f5cbae8907190759851c8ad4a0644a2198f6bd118d9845f2","LICENSE-MIT":"cbb95c14fc8f3cd76a53dc7eb6be80b4ac06dc2833881027c998f748b7fea503","README.md":"4deecc83b525ea277b10b985fac0ee11a39d3c17e0b8e7edf58801b9ebf25718","src/ext.rs":"a4197004956e751d430ba4a83d620699e21aa57bc165ff130d8282365bac5f25","src/lib.rs":"43a73129d00fad0759696e9ea13cde08f0350e022dbc201c0569091ed5412419","src/small_float.rs":"65e6d46991a872caa15dce00be1b8c3da0c86199899929f8e42934678704f001","src/tests.rs":"599afbe9ec140d5cdf1286d26d93de5f443cdf641c012d578001ff6efac6aacb"},"package":"e234d535da3521eb95106f40f0b73483d80bfb3aacf27c40d7e2b72f1a3e00a2"}

36
vendor/offset-allocator/Cargo.toml vendored Normal file

@@ -0,0 +1,36 @@
# THIS FILE IS AUTOMATICALLY GENERATED BY CARGO
#
# When uploading crates to the registry Cargo will automatically
# "normalize" Cargo.toml files for maximal compatibility
# with all versions of Cargo and also rewrite `path` dependencies
# to registry (e.g., crates.io) dependencies.
#
# If you are reading this file be aware that the original Cargo.toml
# will likely look very different (and much more reasonable).
# See Cargo.toml.orig for the original contents.
[package]
edition = "2021"
name = "offset-allocator"
version = "0.2.0"
authors = ["Patrick Walton <pcwalton@mimiga.net>"]
build = false
autobins = false
autoexamples = false
autotests = false
autobenches = false
description = "A port of Sebastian Aaltonen's `OffsetAllocator` to Rust"
readme = "README.md"
keywords = ["memory-management"]
license = "MIT"
repository = "https://github.com/pcwalton/offset-allocator/"

[lib]
name = "offset_allocator"
path = "src/lib.rs"

[dependencies.log]
version = "0.4"

[dependencies.nonmax]
version = "0.5"

21
vendor/offset-allocator/LICENSE-MIT vendored Normal file

@@ -0,0 +1,21 @@
MIT License

Copyright (c) 2023 Sebastian Aaltonen, Patrick Walton

Permission is hereby granted, free of charge, to any person obtaining a copy
of this software and associated documentation files (the "Software"), to deal
in the Software without restriction, including without limitation the rights
to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
copies of the Software, and to permit persons to whom the Software is
furnished to do so, subject to the following conditions:

The above copyright notice and this permission notice shall be included in all
copies or substantial portions of the Software.

THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
SOFTWARE.

52
vendor/offset-allocator/README.md vendored Normal file

@@ -0,0 +1,52 @@
# `offset-allocator`

## Overview

This is a port of [Sebastian Aaltonen's `OffsetAllocator`] package for C++ to 100% safe Rust. It's a fast, simple, hard real-time allocator. This is especially useful for managing GPU resources, and the goal is to use it in [Bevy].

The port has been made into more or less idiomatic Rust but is otherwise mostly line-for-line, preserving comments. That way, patches for the original `OffsetAllocator` should be readily transferable to this Rust port.

Please note that `offset-allocator` isn't a Rust allocator conforming to the `GlobalAlloc` trait. You can't use this crate as a drop-in replacement for the system allocator, `jemalloc`, `wee_alloc`, etc. The general algorithm that this crate uses could be adapted to construct a Rust allocator, but that's beyond the scope of this particular implementation. This is by design, so that this allocator can be used to manage resources that aren't just CPU memory: in particular, you can manage allocations inside GPU buffers with it. By contrast, Rust allocators are hard-wired to the CPU and can't be used to manage GPU resources.
## Description

This allocator is completely agnostic to what it's allocating: it only knows
about a contiguous block of memory of a specific size. That size need not be in
bytes: this is especially useful when allocating inside a buffer of fixed-size
structures. For example, if using this allocator to divide up a GPU index
buffer object, one might want to treat the units of allocation as 32-bit
floats.
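
A minimal sketch of the public API (here treating the units as bytes; the allocator never touches the underlying storage, it only tracks ranges):

```rust
use offset_allocator::Allocator;

// Manage a 1 MiB range of units (e.g., offsets into some GPU buffer).
let mut allocator: Allocator<u32> = Allocator::new(1024 * 1024);

// Reserve 256 units; `offset` is where the sub-range begins.
let allocation = allocator.allocate(256).expect("no contiguous space");
assert_eq!(allocation.offset, 0);

// Return the range to the allocator when done.
allocator.free(allocation);
```
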
From [the original README]:

> Fast hard realtime O(1) offset allocator with minimal fragmentation.
>
> Uses 256 bins with 8 bit floating point distribution (3 bit mantissa + 5 bit exponent) and a two level bitfield to find the next available bin using 2x LZCNT instructions to make all operations O(1). Bin sizes following the floating point distribution ensures hard bounds for memory overhead percentage regardless of size class. Pow2 bins would waste up to +100% memory (+50% on average). Our float bins waste up to +12.5% (+6.25% on average).
>
> The allocation metadata is stored in a separate data structure, making this allocator suitable for sub-allocating any resources, such as GPU heaps, buffers and arrays. Returns an offset to the first element of the allocated contiguous range.
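
(With a 3-bit mantissa, adjacent bin sizes differ by at most one part in eight, which is where the +12.5% worst-case bound comes from: a request that just misses a bin boundary is rounded up by at most one mantissa step.)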
## References

Again per [the original README]:

> This allocator is similar to the two-level segregated fit (TLSF) algorithm.
>
> Comparison paper shows that TLSF algorithm provides best in class performance and fragmentation: <https://www.researchgate.net/profile/Alfons-Crespo/publication/234785757_A_comparison_of_memory_allocators_for_real-time_applications/links/5421d8550cf2a39f4af765f4/A-comparison-of-memory-allocators-for-real-time-applications.pdf>
## Author

C++ version: Sebastian Aaltonen

Rust port: Patrick Walton, @pcwalton

## License

Licensed under the MIT license. See `LICENSE-MIT` for details.

## Code of conduct

`offset-allocator` follows the same code of conduct as Rust itself. Reports can be made to the project authors.

[Sebastian Aaltonen's `OffsetAllocator`]: https://github.com/sebbbi/OffsetAllocator
[the original README]: https://github.com/sebbbi/OffsetAllocator/blob/main/README.md
[Bevy]: https://github.com/bevyengine/bevy/

9
vendor/offset-allocator/src/ext.rs vendored Normal file

@@ -0,0 +1,9 @@
//! Extension functions not present in the original C++ `OffsetAllocator`.
use crate::small_float;
/// Returns the minimum allocator size needed to hold an object of the given
/// size.
pub fn min_allocator_size(needed_object_size: u32) -> u32 {
small_float::float_to_uint(small_float::uint_to_float_round_up(needed_object_size))
}
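// A minimal usage sketch (cf. the `ext_min_allocator_size` test in
// src/tests.rs): an allocator created with exactly this size can satisfy a
// single allocation of `needed_object_size`:
//
//     let size = ext::min_allocator_size(1337);
//     let mut allocator: Allocator<u32> = Allocator::new(size);
//     assert!(allocator.allocate(1337).is_some());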

592
vendor/offset-allocator/src/lib.rs vendored Normal file

@@ -0,0 +1,592 @@
// offset-allocator/src/lib.rs
#![doc = include_str!("../README.md")]
#![deny(unsafe_code)]
#![warn(missing_docs)]
use std::fmt::{Debug, Display, Formatter, Result as FmtResult};
use log::debug;
use nonmax::{NonMaxU16, NonMaxU32};
pub mod ext;
mod small_float;
#[cfg(test)]
mod tests;
const NUM_TOP_BINS: usize = 32;
const BINS_PER_LEAF: usize = 8;
const TOP_BINS_INDEX_SHIFT: u32 = 3;
const LEAF_BINS_INDEX_MASK: u32 = 7;
const NUM_LEAF_BINS: usize = NUM_TOP_BINS * BINS_PER_LEAF;
/// Determines the number of allocations that the allocator supports.
///
/// By default, [`Allocator`] and related functions use `u32`, which allows for
/// `u32::MAX - 1` allocations. You can, however, use `u16` instead, which
/// causes the allocator to use less memory but limits the number of allocations
/// within a single allocator to at most 65,534.
pub trait NodeIndex: Clone + Copy + Default {
/// The `NonMax` version of this type.
///
/// This is used extensively to optimize `enum` representations.
type NonMax: NodeIndexNonMax + TryFrom<Self> + Into<Self>;
/// The maximum value representable in this type.
const MAX: u32;
/// Converts from an unsigned 32-bit integer to an instance of this type.
fn from_u32(val: u32) -> Self;
/// Converts this type to an unsigned machine word.
fn to_usize(self) -> usize;
}
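// A sketch of choosing the index width: `u16` shrinks the per-node
// bookkeeping but caps live allocations at 65,534.
//
//     let mut small: Allocator<u16> = Allocator::with_max_allocs(4096, 1024);
//     let mut large: Allocator<u32> = Allocator::new(1024 * 1024);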
/// The `NonMax` version of the [`NodeIndex`].
///
/// For example, for `u32`, the `NonMax` version is [`NonMaxU32`].
pub trait NodeIndexNonMax: Clone + Copy + PartialEq + Default + Debug + Display {
/// Converts this type to an unsigned machine word.
fn to_usize(self) -> usize;
}
/// An allocator that manages a single contiguous chunk of space and hands out
/// portions of it as requested.
pub struct Allocator<NI = u32>
where
NI: NodeIndex,
{
size: u32,
max_allocs: u32,
free_storage: u32,
used_bins_top: u32,
used_bins: [u8; NUM_TOP_BINS],
bin_indices: [Option<NI::NonMax>; NUM_LEAF_BINS],
nodes: Vec<Node<NI>>,
free_nodes: Vec<NI::NonMax>,
free_offset: u32,
}
/// A single allocation.
#[derive(Clone, Copy)]
pub struct Allocation<NI = u32>
where
NI: NodeIndex,
{
/// The location of this allocation within the buffer.
pub offset: NI,
/// The node index associated with this allocation.
metadata: NI::NonMax,
}
/// Provides a summary of the state of the allocator, including space remaining.
#[derive(Debug)]
pub struct StorageReport {
/// The amount of free space left.
pub total_free_space: u32,
/// The maximum potential size of a single contiguous allocation.
pub largest_free_region: u32,
}
/// Provides a detailed accounting of each bin within the allocator.
#[derive(Debug)]
pub struct StorageReportFull {
/// Each bin within the allocator.
pub free_regions: [StorageReportFullRegion; NUM_LEAF_BINS],
}
/// A detailed accounting of each allocator bin.
#[derive(Clone, Copy, Debug, Default)]
pub struct StorageReportFullRegion {
/// The size of the bin, in units.
pub size: u32,
/// The number of allocations in the bin.
pub count: u32,
}
#[derive(Clone, Copy, Default)]
struct Node<NI = u32>
where
NI: NodeIndex,
{
data_offset: u32,
data_size: u32,
bin_list_prev: Option<NI::NonMax>,
bin_list_next: Option<NI::NonMax>,
neighbor_prev: Option<NI::NonMax>,
neighbor_next: Option<NI::NonMax>,
used: bool, // TODO: Merge as bit flag
}
// Utility functions
fn find_lowest_bit_set_after(bit_mask: u32, start_bit_index: u32) -> Option<NonMaxU32> {
let mask_before_start_index = (1 << start_bit_index) - 1;
let mask_after_start_index = !mask_before_start_index;
let bits_after = bit_mask & mask_after_start_index;
if bits_after == 0 {
None
} else {
NonMaxU32::try_from(bits_after.trailing_zeros()).ok()
}
}
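// Worked example: with bit_mask = 0b0100_1000 (bits 3 and 6 set) and
// start_bit_index = 4, the mask clears bits 0..=3, leaving only bit 6 set,
// so the function returns Some(6).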
impl<NI> Allocator<NI>
where
NI: NodeIndex,
{
/// Creates a new allocator, managing a contiguous block of memory of `size`
/// units, with a default reasonable number of maximum allocations.
pub fn new(size: u32) -> Self {
Allocator::with_max_allocs(size, u32::min(128 * 1024, NI::MAX - 1))
}
/// Creates a new allocator, managing a contiguous block of memory of `size`
/// units, with the given number of maximum allocations.
///
/// Note that the maximum number of allocations must be less than
/// [`NodeIndex::MAX`] minus one. If this restriction is violated, this
/// constructor will panic.
pub fn with_max_allocs(size: u32, max_allocs: u32) -> Self {
assert!(max_allocs < NI::MAX - 1);
let mut this = Self {
size,
max_allocs,
free_storage: 0,
used_bins_top: 0,
free_offset: 0,
used_bins: [0; NUM_TOP_BINS],
bin_indices: [None; NUM_LEAF_BINS],
nodes: vec![],
free_nodes: vec![],
};
this.reset();
this
}
/// Clears out all allocations.
pub fn reset(&mut self) {
self.free_storage = 0;
self.used_bins_top = 0;
self.free_offset = self.max_allocs - 1;
self.used_bins.iter_mut().for_each(|bin| *bin = 0);
self.bin_indices.iter_mut().for_each(|index| *index = None);
self.nodes = vec![Node::default(); self.max_allocs as usize];
// Freelist is a stack. Nodes in inverse order so that [0] pops first.
self.free_nodes = (0..self.max_allocs)
.map(|i| {
NI::NonMax::try_from(NI::from_u32(self.max_allocs - i - 1)).unwrap_or_default()
})
.collect();
// Start state: Whole storage as one big node
// Algorithm will split remainders and push them back as smaller nodes
self.insert_node_into_bin(self.size, 0);
}
/// Allocates a block of `size` elements and returns its allocation.
///
/// If there's not enough contiguous space for this allocation, returns
/// `None`.
pub fn allocate(&mut self, size: u32) -> Option<Allocation<NI>> {
// Out of allocations?
if self.free_offset == 0 {
return None;
}
// Round up to bin index to ensure that alloc >= bin
// Gives us min bin index that fits the size
let min_bin_index = small_float::uint_to_float_round_up(size);
let min_top_bin_index = min_bin_index >> TOP_BINS_INDEX_SHIFT;
let min_leaf_bin_index = min_bin_index & LEAF_BINS_INDEX_MASK;
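// Worked example: size = 1337 rounds up to bin index 67 (exp = 8,
// mantissa = 3), giving min_top_bin_index = 67 >> 3 = 8 and
// min_leaf_bin_index = 67 & 7 = 3. Any node binned at index 67 has size
// >= float_to_uint(67) = 1408 >= 1337, so anything in that bin fits.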
let mut top_bin_index = min_top_bin_index;
let mut leaf_bin_index = None;
// If top bin exists, scan its leaf bin. This can fail (NO_SPACE).
if (self.used_bins_top & (1 << top_bin_index)) != 0 {
leaf_bin_index = find_lowest_bit_set_after(
self.used_bins[top_bin_index as usize] as _,
min_leaf_bin_index,
);
}
// If we didn't find space in top bin, we search top bin from +1
let leaf_bin_index = match leaf_bin_index {
Some(leaf_bin_index) => leaf_bin_index,
None => {
top_bin_index =
find_lowest_bit_set_after(self.used_bins_top, min_top_bin_index + 1)?.into();
// All leaf bins here fit the alloc, since the top bin was
// rounded up. Start leaf search from bit 0.
//
// NOTE: This search can't fail since at least one leaf bit was
// set because the top bit was set.
NonMaxU32::try_from(self.used_bins[top_bin_index as usize].trailing_zeros())
.unwrap()
}
};
let bin_index = (top_bin_index << TOP_BINS_INDEX_SHIFT) | u32::from(leaf_bin_index);
// Pop the top node of the bin. Bin top = node.next.
let node_index = self.bin_indices[bin_index as usize].unwrap();
let node = &mut self.nodes[node_index.to_usize()];
let node_total_size = node.data_size;
node.data_size = size;
node.used = true;
self.bin_indices[bin_index as usize] = node.bin_list_next;
if let Some(bin_list_next) = node.bin_list_next {
self.nodes[bin_list_next.to_usize()].bin_list_prev = None;
}
self.free_storage -= node_total_size;
debug!(
"Free storage: {} (-{}) (allocate)",
self.free_storage, node_total_size
);
// Bin empty?
if self.bin_indices[bin_index as usize].is_none() {
// Remove a leaf bin mask bit
self.used_bins[top_bin_index as usize] &= !(1 << u32::from(leaf_bin_index));
// All leaf bins empty?
if self.used_bins[top_bin_index as usize] == 0 {
// Remove a top bin mask bit
self.used_bins_top &= !(1 << top_bin_index);
}
}
// Push back remainder N elements to a lower bin
let remainder_size = node_total_size - size;
if remainder_size > 0 {
let Node {
data_offset,
neighbor_next,
..
} = self.nodes[node_index.to_usize()];
let new_node_index = self.insert_node_into_bin(remainder_size, data_offset + size);
// Link nodes next to each other so that we can merge them later if both are free
// And update the old next neighbor to point to the new node (in middle)
let node = &mut self.nodes[node_index.to_usize()];
if let Some(neighbor_next) = node.neighbor_next {
self.nodes[neighbor_next.to_usize()].neighbor_prev = Some(new_node_index);
}
self.nodes[new_node_index.to_usize()].neighbor_prev = Some(node_index);
self.nodes[new_node_index.to_usize()].neighbor_next = neighbor_next;
self.nodes[node_index.to_usize()].neighbor_next = Some(new_node_index);
}
let node = &mut self.nodes[node_index.to_usize()];
Some(Allocation {
offset: NI::from_u32(node.data_offset),
metadata: node_index,
})
}
/// Frees an allocation, returning the data to the heap.
///
/// If the allocation has already been freed, the behavior is unspecified.
/// It may or may not panic. Note that, because this crate contains no
/// unsafe code, the memory safety of the allocator *itself* will be
/// uncompromised, even on double free.
pub fn free(&mut self, allocation: Allocation<NI>) {
let node_index = allocation.metadata;
// Merge with neighbors…
let Node {
data_offset: mut offset,
data_size: mut size,
used,
..
} = self.nodes[node_index.to_usize()];
// Double delete check
assert!(used);
if let Some(neighbor_prev) = self.nodes[node_index.to_usize()].neighbor_prev {
if !self.nodes[neighbor_prev.to_usize()].used {
// Previous (contiguous) free node: Change offset to previous
// node offset. Sum sizes
let prev_node = &self.nodes[neighbor_prev.to_usize()];
offset = prev_node.data_offset;
size += prev_node.data_size;
// Remove node from the bin linked list and put it in the
// freelist
self.remove_node_from_bin(neighbor_prev);
let prev_node = &self.nodes[neighbor_prev.to_usize()];
debug_assert_eq!(prev_node.neighbor_next, Some(node_index));
self.nodes[node_index.to_usize()].neighbor_prev = prev_node.neighbor_prev;
}
}
if let Some(neighbor_next) = self.nodes[node_index.to_usize()].neighbor_next {
if !self.nodes[neighbor_next.to_usize()].used {
// Next (contiguous) free node: Offset remains the same. Sum
// sizes.
let next_node = &self.nodes[neighbor_next.to_usize()];
size += next_node.data_size;
// Remove node from the bin linked list and put it in the
// freelist
self.remove_node_from_bin(neighbor_next);
let next_node = &self.nodes[neighbor_next.to_usize()];
debug_assert_eq!(next_node.neighbor_prev, Some(node_index));
self.nodes[node_index.to_usize()].neighbor_next = next_node.neighbor_next;
}
}
let Node {
neighbor_next,
neighbor_prev,
..
} = self.nodes[node_index.to_usize()];
// Insert the removed node to freelist
debug!(
"Putting node {} into freelist[{}] (free)",
node_index,
self.free_offset + 1
);
self.free_offset += 1;
self.free_nodes[self.free_offset as usize] = node_index;
// Insert the (combined) free node to bin
let combined_node_index = self.insert_node_into_bin(size, offset);
// Connect neighbors with the new combined node
if let Some(neighbor_next) = neighbor_next {
self.nodes[combined_node_index.to_usize()].neighbor_next = Some(neighbor_next);
self.nodes[neighbor_next.to_usize()].neighbor_prev = Some(combined_node_index);
}
if let Some(neighbor_prev) = neighbor_prev {
self.nodes[combined_node_index.to_usize()].neighbor_prev = Some(neighbor_prev);
self.nodes[neighbor_prev.to_usize()].neighbor_next = Some(combined_node_index);
}
}
fn insert_node_into_bin(&mut self, size: u32, data_offset: u32) -> NI::NonMax {
// Round down to bin index to ensure that bin >= alloc
let bin_index = small_float::uint_to_float_round_down(size);
let top_bin_index = bin_index >> TOP_BINS_INDEX_SHIFT;
let leaf_bin_index = bin_index & LEAF_BINS_INDEX_MASK;
// Bin was empty before?
if self.bin_indices[bin_index as usize].is_none() {
// Set bin mask bits
self.used_bins[top_bin_index as usize] |= 1 << leaf_bin_index;
self.used_bins_top |= 1 << top_bin_index;
}
// Take a freelist node and insert on top of the bin linked list (next = old top)
let top_node_index = self.bin_indices[bin_index as usize];
let free_offset = self.free_offset;
let node_index = self.free_nodes[free_offset as usize];
self.free_offset -= 1;
debug!(
"Getting node {} from freelist[{}]",
node_index,
self.free_offset + 1
);
self.nodes[node_index.to_usize()] = Node {
data_offset,
data_size: size,
bin_list_next: top_node_index,
..Node::default()
};
if let Some(top_node_index) = top_node_index {
self.nodes[top_node_index.to_usize()].bin_list_prev = Some(node_index);
}
self.bin_indices[bin_index as usize] = Some(node_index);
self.free_storage += size;
debug!(
"Free storage: {} (+{}) (insert_node_into_bin)",
self.free_storage, size
);
node_index
}
fn remove_node_from_bin(&mut self, node_index: NI::NonMax) {
// Copy the node to work around borrow check.
let node = self.nodes[node_index.to_usize()];
match node.bin_list_prev {
Some(bin_list_prev) => {
// Easy case: We have previous node. Just remove this node from the middle of the list.
self.nodes[bin_list_prev.to_usize()].bin_list_next = node.bin_list_next;
if let Some(bin_list_next) = node.bin_list_next {
self.nodes[bin_list_next.to_usize()].bin_list_prev = node.bin_list_prev;
}
}
None => {
// Hard case: We are the first node in a bin. Find the bin.
// Round down to bin index to ensure that bin >= alloc
let bin_index = small_float::uint_to_float_round_down(node.data_size);
let top_bin_index = (bin_index >> TOP_BINS_INDEX_SHIFT) as usize;
let leaf_bin_index = (bin_index & LEAF_BINS_INDEX_MASK) as usize;
self.bin_indices[bin_index as usize] = node.bin_list_next;
if let Some(bin_list_next) = node.bin_list_next {
self.nodes[bin_list_next.to_usize()].bin_list_prev = None;
}
// Bin empty?
if self.bin_indices[bin_index as usize].is_none() {
// Remove a leaf bin mask bit
self.used_bins[top_bin_index as usize] &= !(1 << leaf_bin_index);
// All leaf bins empty?
if self.used_bins[top_bin_index as usize] == 0 {
// Remove a top bin mask bit
self.used_bins_top &= !(1 << top_bin_index);
}
}
}
}
// Insert the node to freelist
debug!(
"Putting node {} into freelist[{}] (remove_node_from_bin)",
node_index,
self.free_offset + 1
);
self.free_offset += 1;
self.free_nodes[self.free_offset as usize] = node_index;
self.free_storage -= node.data_size;
debug!(
"Free storage: {} (-{}) (remove_node_from_bin)",
self.free_storage, node.data_size
);
}
/// Returns the *used* size of an allocation.
///
/// Note that this may be larger than the size requested at allocation time,
/// due to rounding.
pub fn allocation_size(&self, allocation: Allocation<NI>) -> u32 {
self.nodes
.get(allocation.metadata.to_usize())
.map(|node| node.data_size)
.unwrap_or_default()
}
/// Returns a structure containing the amount of free space remaining, as
/// well as the largest amount that can be allocated at once.
pub fn storage_report(&self) -> StorageReport {
let mut largest_free_region = 0;
let mut free_storage = 0;
// Out of allocations? -> Zero free space
if self.free_offset > 0 {
free_storage = self.free_storage;
if self.used_bins_top > 0 {
let top_bin_index = 31 - self.used_bins_top.leading_zeros();
let leaf_bin_index =
31 - (self.used_bins[top_bin_index as usize] as u32).leading_zeros();
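// The highest set bit of `used_bins_top` names the largest non-empty
// top bin, and the highest set bit of that bin's leaf mask names its
// largest non-empty leaf bin. Nodes are binned by rounding their size
// down, so float_to_uint of the combined bin index is a size that some
// free node in that bin is guaranteed to meet or exceed.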
largest_free_region = small_float::float_to_uint(
(top_bin_index << TOP_BINS_INDEX_SHIFT) | leaf_bin_index,
);
debug_assert!(free_storage >= largest_free_region);
}
}
StorageReport {
total_free_space: free_storage,
largest_free_region,
}
}
/// Returns detailed information about the number of allocations in each
/// bin.
pub fn storage_report_full(&self) -> StorageReportFull {
let mut report = StorageReportFull::default();
for i in 0..NUM_LEAF_BINS {
let mut count = 0;
let mut maybe_node_index = self.bin_indices[i];
while let Some(node_index) = maybe_node_index {
maybe_node_index = self.nodes[node_index.to_usize()].bin_list_next;
count += 1;
}
report.free_regions[i] = StorageReportFullRegion {
size: small_float::float_to_uint(i as u32),
count,
}
}
report
}
}
impl Default for StorageReportFull {
fn default() -> Self {
Self {
free_regions: [Default::default(); NUM_LEAF_BINS],
}
}
}
impl<NI> Debug for Allocator<NI>
where
NI: NodeIndex,
{
fn fmt(&self, f: &mut Formatter<'_>) -> FmtResult {
self.storage_report().fmt(f)
}
}
impl NodeIndex for u32 {
type NonMax = NonMaxU32;
const MAX: u32 = u32::MAX;
fn from_u32(val: u32) -> Self {
val
}
fn to_usize(self) -> usize {
self as usize
}
}
impl NodeIndex for u16 {
type NonMax = NonMaxU16;
const MAX: u32 = u16::MAX as u32;
fn from_u32(val: u32) -> Self {
val as u16
}
fn to_usize(self) -> usize {
self as usize
}
}
impl NodeIndexNonMax for NonMaxU32 {
fn to_usize(self) -> usize {
u32::from(self) as usize
}
}
impl NodeIndexNonMax for NonMaxU16 {
fn to_usize(self) -> usize {
u16::from(self) as usize
}
}

65
vendor/offset-allocator/src/small_float.rs vendored Normal file

@@ -0,0 +1,65 @@
// offset-allocator/src/small_float.rs
pub const MANTISSA_BITS: u32 = 3;
pub const MANTISSA_VALUE: u32 = 1 << MANTISSA_BITS;
pub const MANTISSA_MASK: u32 = MANTISSA_VALUE - 1;
// Bin sizes follow floating point (exponent + mantissa) distribution (piecewise linear log approx)
// This ensures that for each size class, the average overhead percentage stays the same
pub fn uint_to_float_round_up(size: u32) -> u32 {
let mut exp = 0;
let mut mantissa;
if size < MANTISSA_VALUE {
// Denorm: 0..(MANTISSA_VALUE-1)
mantissa = size;
} else {
// Normalized: Hidden high bit always 1. Not stored. Just like float.
let leading_zeros = size.leading_zeros();
let highest_set_bit = 31 - leading_zeros;
let mantissa_start_bit = highest_set_bit - MANTISSA_BITS;
exp = mantissa_start_bit + 1;
mantissa = (size >> mantissa_start_bit) & MANTISSA_MASK;
let low_bits_mask = (1 << mantissa_start_bit) - 1;
// Round up!
if (size & low_bits_mask) != 0 {
mantissa += 1;
}
}
// + allows mantissa->exp overflow for round up
(exp << MANTISSA_BITS) + mantissa
}
pub fn uint_to_float_round_down(size: u32) -> u32 {
let mut exp = 0;
let mantissa;
if size < MANTISSA_VALUE {
// Denorm: 0..(MANTISSA_VALUE-1)
mantissa = size;
} else {
// Normalized: Hidden high bit always 1. Not stored. Just like float.
let leading_zeros = size.leading_zeros();
let highest_set_bit = 31 - leading_zeros;
let mantissa_start_bit = highest_set_bit - MANTISSA_BITS;
exp = mantissa_start_bit + 1;
mantissa = (size >> mantissa_start_bit) & MANTISSA_MASK;
}
(exp << MANTISSA_BITS) | mantissa
}
pub fn float_to_uint(float_value: u32) -> u32 {
let exponent = float_value >> MANTISSA_BITS;
let mantissa = float_value & MANTISSA_MASK;
if exponent == 0 {
mantissa
} else {
(mantissa | MANTISSA_VALUE) << (exponent - 1)
}
}
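// Worked example (matches `small_float_uint_to_float` in src/tests.rs):
// size = 118 = 0b111_0110. The highest set bit is 6, so mantissa_start_bit
// = 3 and exp = 4; mantissa = (118 >> 3) & 7 = 6. Rounding down gives
// (4 << 3) | 6 = 38; the discarded low bits (118 & 7 = 6) are nonzero, so
// rounding up gives 39. Converting back: float_to_uint(38) = (6 | 8) << 3
// = 112 <= 118, and float_to_uint(39) = (7 | 8) << 3 = 120 >= 118.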

276
vendor/offset-allocator/src/tests.rs vendored Normal file

@@ -0,0 +1,276 @@
// offset-allocator/src/tests.rs
use std::array;
use crate::{ext, small_float, Allocator};
#[test]
fn small_float_uint_to_float() {
// Denorms, exp=1 and exp=2 + mantissa = 0 are all precise.
// NOTE: Assuming 8 value (3 bit) mantissa.
// If this test fails, please change this assumption!
let precise_number_count = 17;
for i in 0..precise_number_count {
let round_up = small_float::uint_to_float_round_up(i);
let round_down = small_float::uint_to_float_round_down(i);
assert_eq!(i, round_up);
assert_eq!(i, round_down);
}
// Test some random picked numbers
struct NumberFloatUpDown {
number: u32,
up: u32,
down: u32,
}
let test_data = [
NumberFloatUpDown {
number: 17,
up: 17,
down: 16,
},
NumberFloatUpDown {
number: 118,
up: 39,
down: 38,
},
NumberFloatUpDown {
number: 1024,
up: 64,
down: 64,
},
NumberFloatUpDown {
number: 65536,
up: 112,
down: 112,
},
NumberFloatUpDown {
number: 529445,
up: 137,
down: 136,
},
NumberFloatUpDown {
number: 1048575,
up: 144,
down: 143,
},
];
for v in test_data {
let round_up = small_float::uint_to_float_round_up(v.number);
let round_down = small_float::uint_to_float_round_down(v.number);
assert_eq!(round_up, v.up);
assert_eq!(round_down, v.down);
}
}
#[test]
fn small_float_float_to_uint() {
// Denorms, exp=1 and exp=2 + mantissa = 0 are all precise.
// NOTE: Assuming 8 value (3 bit) mantissa.
// If this test fails, please change this assumption!
let precise_number_count = 17;
for i in 0..precise_number_count {
let v = small_float::float_to_uint(i);
assert_eq!(i, v);
}
// Test that float->uint->float conversion is precise for all numbers
// NOTE: Test values < 240. 240->4G = overflows 32 bit integer
for i in 0..240 {
let v = small_float::float_to_uint(i);
let round_up = small_float::uint_to_float_round_up(v);
let round_down = small_float::uint_to_float_round_down(v);
assert_eq!(i, round_up);
assert_eq!(i, round_down);
}
}
#[test]
fn basic_offset_allocator() {
let mut allocator = Allocator::new(1024 * 1024 * 256);
let a = allocator.allocate(1337).unwrap();
let offset: u32 = a.offset;
assert_eq!(offset, 0);
allocator.free(a);
}
#[test]
fn allocate_offset_allocator_simple() {
let mut allocator: Allocator<u32> = Allocator::new(1024 * 1024 * 256);
// Free merges neighbor empty nodes. Next allocation should also have offset = 0
let a = allocator.allocate(0).unwrap();
assert_eq!(a.offset, 0);
let b = allocator.allocate(1).unwrap();
assert_eq!(b.offset, 0);
let c = allocator.allocate(123).unwrap();
assert_eq!(c.offset, 1);
let d = allocator.allocate(1234).unwrap();
assert_eq!(d.offset, 124);
allocator.free(a);
allocator.free(b);
allocator.free(c);
allocator.free(d);
// End: Validate that allocator has no fragmentation left. Should be 100% clean.
let validate_all = allocator.allocate(1024 * 1024 * 256).unwrap();
assert_eq!(validate_all.offset, 0);
allocator.free(validate_all);
}
#[test]
fn allocate_offset_allocator_merge_trivial() {
let mut allocator: Allocator<u32> = Allocator::new(1024 * 1024 * 256);
// Free merges neighbor empty nodes. Next allocation should also have offset = 0
let a = allocator.allocate(1337).unwrap();
assert_eq!(a.offset, 0);
allocator.free(a);
let b = allocator.allocate(1337).unwrap();
assert_eq!(b.offset, 0);
allocator.free(b);
// End: Validate that allocator has no fragmentation left. Should be 100% clean.
let validate_all = allocator.allocate(1024 * 1024 * 256).unwrap();
assert_eq!(validate_all.offset, 0);
allocator.free(validate_all);
}
#[test]
fn allocate_offset_allocator_reuse_trivial() {
let mut allocator: Allocator<u32> = Allocator::new(1024 * 1024 * 256);
// Allocator should reuse the node freed by A, since allocation C fits in the same bin (using pow2 sizes to be sure)
let a = allocator.allocate(1024).unwrap();
assert_eq!(a.offset, 0);
let b = allocator.allocate(3456).unwrap();
assert_eq!(b.offset, 1024);
allocator.free(a);
let c = allocator.allocate(1024).unwrap();
assert_eq!(c.offset, 0);
allocator.free(c);
allocator.free(b);
// End: Validate that allocator has no fragmentation left. Should be 100% clean.
let validate_all = allocator.allocate(1024 * 1024 * 256).unwrap();
assert_eq!(validate_all.offset, 0);
allocator.free(validate_all);
}
#[test]
fn allocate_offset_allocator_reuse_complex() {
let mut allocator: Allocator<u32> = Allocator::new(1024 * 1024 * 256);
// Allocator should not reuse the node freed by A, since allocation C doesn't fit in the same bin.
// However, nodes D and E fit there and should reuse the node freed by A.
let a = allocator.allocate(1024).unwrap();
assert_eq!(a.offset, 0);
let b = allocator.allocate(3456).unwrap();
assert_eq!(b.offset, 1024);
allocator.free(a);
let c = allocator.allocate(2345).unwrap();
assert_eq!(c.offset, 1024 + 3456);
let d = allocator.allocate(456).unwrap();
assert_eq!(d.offset, 0);
let e = allocator.allocate(512).unwrap();
assert_eq!(e.offset, 456);
let report = allocator.storage_report();
assert_eq!(
report.total_free_space,
1024 * 1024 * 256 - 3456 - 2345 - 456 - 512
);
assert_ne!(report.largest_free_region, report.total_free_space);
allocator.free(c);
allocator.free(d);
allocator.free(b);
allocator.free(e);
// End: Validate that allocator has no fragmentation left. Should be 100% clean.
let validate_all = allocator.allocate(1024 * 1024 * 256).unwrap();
assert_eq!(validate_all.offset, 0);
allocator.free(validate_all);
}
#[test]
fn allocate_offset_allocator_zero_fragmentation() {
let mut allocator: Allocator<u32> = Allocator::new(1024 * 1024 * 256);
// Allocate 256x 1MB. Should fit. Then free four random slots and reallocate four slots.
// Plus free four contiguous slots and allocate one 4x larger slot. All with zero fragmentation!
let mut allocations: [_; 256] = array::from_fn(|i| {
let allocation = allocator.allocate(1024 * 1024).unwrap();
assert_eq!(allocation.offset, i as u32 * 1024 * 1024);
allocation
});
let report = allocator.storage_report();
assert_eq!(report.total_free_space, 0);
assert_eq!(report.largest_free_region, 0);
// Free four random slots
allocator.free(allocations[243]);
allocator.free(allocations[5]);
allocator.free(allocations[123]);
allocator.free(allocations[95]);
// Free four contiguous slots (allocator must merge)
allocator.free(allocations[151]);
allocator.free(allocations[152]);
allocator.free(allocations[153]);
allocator.free(allocations[154]);
allocations[243] = allocator.allocate(1024 * 1024).unwrap();
allocations[5] = allocator.allocate(1024 * 1024).unwrap();
allocations[123] = allocator.allocate(1024 * 1024).unwrap();
allocations[95] = allocator.allocate(1024 * 1024).unwrap();
allocations[151] = allocator.allocate(1024 * 1024 * 4).unwrap(); // 4x larger
for (i, allocation) in allocations.iter().enumerate() {
if !(152..155).contains(&i) {
allocator.free(*allocation);
}
}
let report2 = allocator.storage_report();
assert_eq!(report2.total_free_space, 1024 * 1024 * 256);
assert_eq!(report2.largest_free_region, 1024 * 1024 * 256);
// End: Validate that allocator has no fragmentation left. Should be 100% clean.
let validate_all = allocator.allocate(1024 * 1024 * 256).unwrap();
assert_eq!(validate_all.offset, 0);
allocator.free(validate_all);
}
#[test]
fn ext_min_allocator_size() {
// Randomly generated integers on a log distribution, σ = 10.
static TEST_OBJECT_SIZES: [u32; 42] = [
0, 1, 2, 3, 4, 5, 8, 17, 23, 36, 51, 68, 87, 151, 165, 167, 201, 223, 306, 346, 394, 411,
806, 969, 1404, 1798, 2236, 4281, 4745, 13989, 21095, 26594, 27146, 29679, 144685, 153878,
495127, 727999, 1377073, 9440387, 41994490, 68520116,
];
for needed_object_size in TEST_OBJECT_SIZES {
let allocator_size = ext::min_allocator_size(needed_object_size);
let mut allocator: Allocator<u32> = Allocator::new(allocator_size);
assert!(allocator.allocate(needed_object_size).is_some());
}
}