Vendor dependencies for 0.3.0 release

2025-09-27 10:29:08 -05:00
parent 0c8d39d483
commit 82ab7f317b
26803 changed files with 16134934 additions and 0 deletions


@@ -0,0 +1 @@
{"files":{"CONTRIBUTING.md":"62611fa83369c2cbfbfa560ed0195622627e94ba1af13879b54921c7c1013d47","Cargo.lock":"8a9c6a36367be367b834ff8d5eb00912fcb5ea4e385d87ba6990d1eeae0f2667","Cargo.toml":"34c0f7317280a449cc0f1dbc162cba6a0a5c50fcf172c432a5f86c04b8d08799","LICENSE-APACHE":"a6cba85bc92e0cff7a450b1d873c0eaa2e9fc96bf472df0247a26bec77bf3ff9","LICENSE-MIT":"508a77d2e7b51d98adeed32648ad124b7b30241a8e70b2e72c99f92d8e5874d1","README.md":"811b37257c540eb907226ab7e6c638ef6a936a298a17fb416bb70d9852697743","src/audio.rs":"7c98b938d9680c2e89471c55c40cba7e033895be039ac2650e964e99aaff497c","src/audio_output.rs":"fd863df49de1a50d94343d2eef34f431d3e4df2b8c4554a47ba263ec6104ee97","src/audio_source.rs":"2674786fc58bf9473f02ef453447a4ae20ae5d958ee3062d23f37432c3dae6b3","src/lib.rs":"5214db56ab6e91926c8de169f2badc8dc8cc6d6d97e5fcc89866b0b51cbb596d","src/pitch.rs":"8b2771e3a5aac596d9bf59e4c89ee63c3bf93cdcb25b5930297b6b2dbc7dd717","src/sinks.rs":"029b3488665317ef763fdc240a35c1751cf2fdb9919aea79d6c482240e8b8291","src/volume.rs":"8566edc928f82f04e3fd020880385b25ed4084407fdf814bec0bb39ad5fc7eae"},"package":"f2b4f6f2a5c6c0e7c6825e791d2a061c76c2d6784f114c8f24382163fabbfaaa"}

152
vendor/bevy_audio/CONTRIBUTING.md vendored Normal file

@@ -0,0 +1,152 @@
# Contributing to `bevy_audio`
This document highlights some general explanations and guidelines for
contributing code to this crate. It assumes knowledge of programming, but not
necessarily of audio programming specifically. It lays out rules to follow, on
top of the general programming and contribution guidelines of Bevy, that are of
particular interest for performance reasons.
This document applies to the level of abstraction equivalent to working with
nodes in the render graph, not to manipulating entities with meshes and
materials.
Note that these guidelines are general to any audio programming application, and
not just Bevy.
## Fundamentals of working with audio
### A brief introduction to digital audio signals
Inside a computer, audio signals are digital streams of audio samples
(historically of various types, but nowadays 32-bit floats), taken at regular
intervals.
How often this sampling is done is determined by the **sample rate** parameter.
This parameter is exposed to users in OS settings, as well as in some
applications.
The sample rate directly determines the range of audio frequencies the system
can represent. That limit sits at half the sample rate, meaning that any sound
containing higher frequencies will introduce artifacts.
If you want to learn more, read about the **Nyquist sampling theorem** and
**Frequency aliasing**.
### How the computer interfaces with the sound card
When requesting audio input or output, the OS creates a special
high-priority thread whose task it is to take in the input audio stream, and/or
produce the output stream. The audio driver passes an audio buffer that you read
from (for input) or write to (for output). The size of that buffer is also a
parameter that is configured when opening an audio stream with the sound card,
and is sometimes reflected in application settings.
Typical values for buffer size and sample rate are 512 samples at a sample rate
of 48 kHz. This means that for every 512 samples of audio the driver is going to
send to the sound card, the output callback function is run in this high-priority
audio thread. Every second, as dictated by the sample rate, the sound card
needs 48 000 samples of audio data. This means that we can expect the callback
function to be run every `512/(48000 Hz)` or 10.666... ms.
This figure is also the latency of the audio engine, that is, how much time it
takes between a user interaction and hearing the effects out of the speakers.
Therefore, there is a "tug of war" between decreasing the buffer size for
latency reasons, and increasing it for performance reasons. The threshold for
instantaneity in audio is around 15 ms, which is why 512 is a good value for
interactive applications.
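To make the arithmetic above concrete, the callback period (and the buffer's
contribution to latency) is simply the buffer size divided by the sample rate.
A minimal sketch:

```rust
/// Callback period in milliseconds for a given buffer size and sample rate.
fn callback_period_ms(buffer_size: u32, sample_rate_hz: u32) -> f64 {
    buffer_size as f64 / sample_rate_hz as f64 * 1000.0
}

fn main() {
    // 512 samples at 48 kHz: roughly 10.7 ms between callbacks.
    assert!((callback_period_ms(512, 48_000) - 10.666).abs() < 0.01);
    // Smaller buffers lower the latency, but leave less headroom per callback.
    assert!((callback_period_ms(128, 48_000) - 2.666).abs() < 0.01);
}
```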
### Real-time programming
The parts of the code running in the audio thread have exactly
`buffer_size/samplerate` seconds to complete, beyond which the audio driver
outputs silence (or worse, the previous buffer output, or garbage data), which
the user perceives as a glitch and severely deteriorates the quality of the
audio output of the engine. It is therefore critical to work with code that is
guaranteed to finish in that time.
One step to achieving this is making sure that all machines across the spectrum
of supported CPUs can reliably perform the computations needed for the game in
that amount of time, and play around with the buffer size to find the best
compromise between latency and performance. Another is to conditionally enable
certain effects for more powerful CPUs, when that is possible.
But the main step is to write code to run in the audio thread following
real-time programming guidelines. Real-time programming is a set of constraints
on code and structures that guarantees the code completes at some point, i.e. it
cannot be stuck in an infinite loop nor can it trigger a deadlock situation.
Practically, the main components of real-time programming are about using
wait-free and lock-free structures. Examples of things that are *not* correct in
real-time programming are:
- Allocating anything on the heap (that is, no direct or indirect creation of a
`Vec`, `Box`, or any standard collection, as they are not designed with
real-time programming in mind)
- Locking a mutex - Generally, any kind of system call gives the OS the
opportunity to pause the thread, which is an unbounded operation as we don't
know how long the thread is going to be paused for
- Waiting by looping until some condition is met (also called a spinloop or a
spinlock)
Writing wait-free and lock-free structures is a hard task, and difficult to get
correct; however, many such structures already exist and can be used directly.
There are crates providing replacements for most standard collections.
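As an illustration of the allocation rule above, a common pattern is to allocate
every buffer up front on the game thread and let the audio callback only reuse
it. The callback shape below is purely illustrative (it is not the real `cpal`
signature), but the principle is the same:

```rust
/// Build an output callback whose working memory is allocated ahead of time.
/// The closure shape here is hypothetical; only the allocation pattern matters.
fn build_output_callback(max_frames: usize) -> impl FnMut(&mut [f32]) {
    // Allocated once, on the game/main thread, before the stream starts.
    let scratch = vec![0.0_f32; max_frames];
    move |output: &mut [f32]| {
        // Real-time safe body: no allocation, no locks, bounded work.
        let frames = output.len().min(scratch.len());
        for (out, value) in output[..frames].iter_mut().zip(&scratch[..frames]) {
            *out = *value;
        }
    }
}
```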
### Where in the code should real-time programming principles be applied?
Any code that is directly or indirectly called by audio threads needs to be
real-time safe.
For the Bevy engine, that is:
- In the callback of `cpal::Stream::build_input_stream` and
`cpal::Stream::build_output_stream`, and all functions called from them
- In implementations of the [`Source`] trait, and all functions called from it
Code that runs in Bevy systems does not need to be real-time safe, as it runs
in the main game loop thread, not in the audio thread.
## Communication with the audio thread
To be able to do anything useful with audio, the thread has to be able to
communicate with the rest of the system, i.e. update parameters, send/receive
audio data, etc., and all of that needs to be done within the constraints of
real-time programming, of course.
### Audio parameters
In most cases, audio parameters can be represented by an atomic floating point
value, where the game loop updates the parameter, and it gets picked up when
processing the next buffer. The downside to this approach is that the audio only
changes once per audio callback, which results in a noticeable "stair-step"
motion of the parameter. The latter can be mitigated by "smoothing" the change
over time, using a tween or linear/exponential smoothing.
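A minimal sketch of that idea, assuming a single shared `f32` parameter (the
names are illustrative and not part of `bevy_audio`):

```rust
use core::sync::atomic::{AtomicU32, Ordering};

/// An `f32` parameter shared between the game thread and the audio thread,
/// stored as raw bits so that it fits in a lock-free atomic.
pub struct SharedParam(AtomicU32);

impl SharedParam {
    pub fn new(value: f32) -> Self {
        Self(AtomicU32::new(value.to_bits()))
    }

    /// Game thread: publish a new target value.
    pub fn set(&self, value: f32) {
        self.0.store(value.to_bits(), Ordering::Relaxed);
    }

    /// Audio thread: read the current target value.
    pub fn get(&self) -> f32 {
        f32::from_bits(self.0.load(Ordering::Relaxed))
    }
}

/// Per-sample exponential smoothing towards the target, so the audible value
/// glides instead of stepping once per buffer.
pub fn smooth_towards(current: f32, target: f32, coeff: f32) -> f32 {
    current + (target - current) * coeff
}
```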
Precise timing for non-interactive events (i.e. on the beat) needs to be set up
using a clock backed by the audio driver -- that is, counting the number of
samples processed, and deriving the time elapsed by dividing by the sample rate to
get the number of seconds elapsed. The precise sample at which the parameter
needs to be changed can then be computed.
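In code, such a clock is only a couple of divisions (a sketch, assuming a
running count of rendered samples is kept by the audio thread):

```rust
/// Seconds elapsed according to the audio clock: samples rendered so far
/// divided by the sample rate.
fn elapsed_seconds(samples_rendered: u64, sample_rate_hz: u32) -> f64 {
    samples_rendered as f64 / sample_rate_hz as f64
}

/// Sample index at which an event scheduled at `time_seconds` should fire.
fn event_sample_index(time_seconds: f64, sample_rate_hz: u32) -> u64 {
    (time_seconds * sample_rate_hz as f64).round() as u64
}
```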
Both interactive and precise events are hard to do, and need very low latency
(i.e. 64 or 128 samples for ~2 ms of latency). It is fundamentally impossible to
react to a user event the very moment it is registered.
### Audio data
Audio data is generally transferred between threads with circular buffers, as
they are simple to implement, fast enough for 99% of use-cases, and are both
wait-free and lock-free. The only difficulty in using circular buffers is deciding
how big they should be; however, even 1 s of audio only costs about 200 kB of memory
per channel (48 000 32-bit samples), which is small enough to go unnoticed even with
potentially hundreds of those buffers.
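For reference, the sketch below shows the shape of such a structure: a
fixed-capacity, wait-free single-producer/single-consumer ring for `f32`
samples. It is illustrative only; crates such as `ringbuf` provide hardened
implementations of the same idea.

```rust
use std::sync::atomic::{AtomicU32, AtomicUsize, Ordering};

/// Wait-free SPSC ring buffer for `f32` samples. Capacity is fixed up front,
/// so no allocation happens on the audio thread. Share it between the two
/// threads with an `Arc`.
pub struct SampleRing {
    slots: Vec<AtomicU32>, // samples stored as raw f32 bits
    head: AtomicUsize,     // next slot to read (advanced by the consumer)
    tail: AtomicUsize,     // next slot to write (advanced by the producer)
}

impl SampleRing {
    pub fn new(capacity: usize) -> Self {
        Self {
            slots: (0..capacity).map(|_| AtomicU32::new(0)).collect(),
            head: AtomicUsize::new(0),
            tail: AtomicUsize::new(0),
        }
    }

    /// Producer side (e.g. the game thread): returns `false` instead of
    /// blocking when the ring is full.
    pub fn push(&self, sample: f32) -> bool {
        let tail = self.tail.load(Ordering::Relaxed);
        let head = self.head.load(Ordering::Acquire);
        if tail.wrapping_sub(head) == self.slots.len() {
            return false;
        }
        self.slots[tail % self.slots.len()].store(sample.to_bits(), Ordering::Relaxed);
        self.tail.store(tail.wrapping_add(1), Ordering::Release);
        true
    }

    /// Consumer side (e.g. the audio thread): returns `None` instead of
    /// blocking when the ring is empty.
    pub fn pop(&self) -> Option<f32> {
        let head = self.head.load(Ordering::Relaxed);
        let tail = self.tail.load(Ordering::Acquire);
        if head == tail {
            return None;
        }
        let bits = self.slots[head % self.slots.len()].load(Ordering::Relaxed);
        self.head.store(head.wrapping_add(1), Ordering::Release);
        Some(f32::from_bits(bits))
    }
}
```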
## Additional resources for audio programming
A more in-depth article about audio programming:
<http://www.rossbencina.com/code/real-time-audio-programming-101-time-waits-for-nothing>
Awesome Audio DSP: <https://github.com/BillyDM/awesome-audio-dsp>

2565
vendor/bevy_audio/Cargo.lock generated vendored Normal file

File diff suppressed because it is too large

138
vendor/bevy_audio/Cargo.toml vendored Normal file

@@ -0,0 +1,138 @@
# THIS FILE IS AUTOMATICALLY GENERATED BY CARGO
#
# When uploading crates to the registry Cargo will automatically
# "normalize" Cargo.toml files for maximal compatibility
# with all versions of Cargo and also rewrite `path` dependencies
# to registry (e.g., crates.io) dependencies.
#
# If you are reading this file be aware that the original Cargo.toml
# will likely look very different (and much more reasonable).
# See Cargo.toml.orig for the original contents.
[package]
edition = "2024"
name = "bevy_audio"
version = "0.16.1"
build = false
autolib = false
autobins = false
autoexamples = false
autotests = false
autobenches = false
description = "Provides audio functionality for Bevy Engine"
homepage = "https://bevyengine.org"
readme = "README.md"
keywords = ["bevy"]
license = "MIT OR Apache-2.0"
repository = "https://github.com/bevyengine/bevy"
resolver = "2"
[package.metadata.docs.rs]
all-features = true
rustdoc-args = [
"-Zunstable-options",
"--generate-link-to-definition",
]
[features]
android_shared_stdcxx = ["cpal/oboe-shared-stdcxx"]
flac = ["rodio/flac"]
minimp3 = ["rodio/minimp3"]
mp3 = ["rodio/mp3"]
symphonia-aac = ["rodio/symphonia-aac"]
symphonia-all = ["rodio/symphonia-all"]
symphonia-flac = ["rodio/symphonia-flac"]
symphonia-isomp4 = ["rodio/symphonia-isomp4"]
symphonia-vorbis = ["rodio/symphonia-vorbis"]
symphonia-wav = ["rodio/symphonia-wav"]
vorbis = ["rodio/vorbis"]
wav = ["rodio/wav"]
[lib]
name = "bevy_audio"
path = "src/lib.rs"
[dependencies.bevy_app]
version = "0.16.1"
[dependencies.bevy_asset]
version = "0.16.1"
[dependencies.bevy_derive]
version = "0.16.1"
[dependencies.bevy_ecs]
version = "0.16.1"
[dependencies.bevy_math]
version = "0.16.1"
[dependencies.bevy_reflect]
version = "0.16.1"
[dependencies.bevy_transform]
version = "0.16.1"
[dependencies.rodio]
version = "0.20"
default-features = false
[dependencies.tracing]
version = "0.1"
features = ["std"]
default-features = false
[target.'cfg(target_arch = "wasm32")'.dependencies.bevy_app]
version = "0.16.1"
features = ["web"]
default-features = false
[target.'cfg(target_arch = "wasm32")'.dependencies.bevy_reflect]
version = "0.16.1"
features = ["web"]
default-features = false
[target.'cfg(target_arch = "wasm32")'.dependencies.rodio]
version = "0.20"
features = ["wasm-bindgen"]
default-features = false
[target.'cfg(target_os = "android")'.dependencies.cpal]
version = "0.15"
optional = true
[lints.clippy]
alloc_instead_of_core = "warn"
allow_attributes = "warn"
allow_attributes_without_reason = "warn"
doc_markdown = "warn"
manual_let_else = "warn"
match_same_arms = "warn"
needless_lifetimes = "allow"
nonstandard_macro_braces = "warn"
print_stderr = "warn"
print_stdout = "warn"
ptr_as_ptr = "warn"
ptr_cast_constness = "warn"
redundant_closure_for_method_calls = "warn"
redundant_else = "warn"
ref_as_ptr = "warn"
semicolon_if_nothing_returned = "warn"
std_instead_of_alloc = "warn"
std_instead_of_core = "warn"
too_long_first_doc_paragraph = "allow"
too_many_arguments = "allow"
type_complexity = "allow"
undocumented_unsafe_blocks = "warn"
unwrap_or_default = "warn"
[lints.rust]
missing_docs = "warn"
unsafe_code = "deny"
unsafe_op_in_unsafe_fn = "warn"
unused_qualifications = "warn"
[lints.rust.unexpected_cfgs]
level = "warn"
priority = 0
check-cfg = ["cfg(docsrs_dep)"]

176
vendor/bevy_audio/LICENSE-APACHE vendored Normal file

@@ -0,0 +1,176 @@
Apache License
Version 2.0, January 2004
http://www.apache.org/licenses/
TERMS AND CONDITIONS FOR USE, REPRODUCTION, AND DISTRIBUTION
1. Definitions.
"License" shall mean the terms and conditions for use, reproduction,
and distribution as defined by Sections 1 through 9 of this document.
"Licensor" shall mean the copyright owner or entity authorized by
the copyright owner that is granting the License.
"Legal Entity" shall mean the union of the acting entity and all
other entities that control, are controlled by, or are under common
control with that entity. For the purposes of this definition,
"control" means (i) the power, direct or indirect, to cause the
direction or management of such entity, whether by contract or
otherwise, or (ii) ownership of fifty percent (50%) or more of the
outstanding shares, or (iii) beneficial ownership of such entity.
"You" (or "Your") shall mean an individual or Legal Entity
exercising permissions granted by this License.
"Source" form shall mean the preferred form for making modifications,
including but not limited to software source code, documentation
source, and configuration files.
"Object" form shall mean any form resulting from mechanical
transformation or translation of a Source form, including but
not limited to compiled object code, generated documentation,
and conversions to other media types.
"Work" shall mean the work of authorship, whether in Source or
Object form, made available under the License, as indicated by a
copyright notice that is included in or attached to the work
(an example is provided in the Appendix below).
"Derivative Works" shall mean any work, whether in Source or Object
form, that is based on (or derived from) the Work and for which the
editorial revisions, annotations, elaborations, or other modifications
represent, as a whole, an original work of authorship. For the purposes
of this License, Derivative Works shall not include works that remain
separable from, or merely link (or bind by name) to the interfaces of,
the Work and Derivative Works thereof.
"Contribution" shall mean any work of authorship, including
the original version of the Work and any modifications or additions
to that Work or Derivative Works thereof, that is intentionally
submitted to Licensor for inclusion in the Work by the copyright owner
or by an individual or Legal Entity authorized to submit on behalf of
the copyright owner. For the purposes of this definition, "submitted"
means any form of electronic, verbal, or written communication sent
to the Licensor or its representatives, including but not limited to
communication on electronic mailing lists, source code control systems,
and issue tracking systems that are managed by, or on behalf of, the
Licensor for the purpose of discussing and improving the Work, but
excluding communication that is conspicuously marked or otherwise
designated in writing by the copyright owner as "Not a Contribution."
"Contributor" shall mean Licensor and any individual or Legal Entity
on behalf of whom a Contribution has been received by Licensor and
subsequently incorporated within the Work.
2. Grant of Copyright License. Subject to the terms and conditions of
this License, each Contributor hereby grants to You a perpetual,
worldwide, non-exclusive, no-charge, royalty-free, irrevocable
copyright license to reproduce, prepare Derivative Works of,
publicly display, publicly perform, sublicense, and distribute the
Work and such Derivative Works in Source or Object form.
3. Grant of Patent License. Subject to the terms and conditions of
this License, each Contributor hereby grants to You a perpetual,
worldwide, non-exclusive, no-charge, royalty-free, irrevocable
(except as stated in this section) patent license to make, have made,
use, offer to sell, sell, import, and otherwise transfer the Work,
where such license applies only to those patent claims licensable
by such Contributor that are necessarily infringed by their
Contribution(s) alone or by combination of their Contribution(s)
with the Work to which such Contribution(s) was submitted. If You
institute patent litigation against any entity (including a
cross-claim or counterclaim in a lawsuit) alleging that the Work
or a Contribution incorporated within the Work constitutes direct
or contributory patent infringement, then any patent licenses
granted to You under this License for that Work shall terminate
as of the date such litigation is filed.
4. Redistribution. You may reproduce and distribute copies of the
Work or Derivative Works thereof in any medium, with or without
modifications, and in Source or Object form, provided that You
meet the following conditions:
(a) You must give any other recipients of the Work or
Derivative Works a copy of this License; and
(b) You must cause any modified files to carry prominent notices
stating that You changed the files; and
(c) You must retain, in the Source form of any Derivative Works
that You distribute, all copyright, patent, trademark, and
attribution notices from the Source form of the Work,
excluding those notices that do not pertain to any part of
the Derivative Works; and
(d) If the Work includes a "NOTICE" text file as part of its
distribution, then any Derivative Works that You distribute must
include a readable copy of the attribution notices contained
within such NOTICE file, excluding those notices that do not
pertain to any part of the Derivative Works, in at least one
of the following places: within a NOTICE text file distributed
as part of the Derivative Works; within the Source form or
documentation, if provided along with the Derivative Works; or,
within a display generated by the Derivative Works, if and
wherever such third-party notices normally appear. The contents
of the NOTICE file are for informational purposes only and
do not modify the License. You may add Your own attribution
notices within Derivative Works that You distribute, alongside
or as an addendum to the NOTICE text from the Work, provided
that such additional attribution notices cannot be construed
as modifying the License.
You may add Your own copyright statement to Your modifications and
may provide additional or different license terms and conditions
for use, reproduction, or distribution of Your modifications, or
for any such Derivative Works as a whole, provided Your use,
reproduction, and distribution of the Work otherwise complies with
the conditions stated in this License.
5. Submission of Contributions. Unless You explicitly state otherwise,
any Contribution intentionally submitted for inclusion in the Work
by You to the Licensor shall be under the terms and conditions of
this License, without any additional terms or conditions.
Notwithstanding the above, nothing herein shall supersede or modify
the terms of any separate license agreement you may have executed
with Licensor regarding such Contributions.
6. Trademarks. This License does not grant permission to use the trade
names, trademarks, service marks, or product names of the Licensor,
except as required for reasonable and customary use in describing the
origin of the Work and reproducing the content of the NOTICE file.
7. Disclaimer of Warranty. Unless required by applicable law or
agreed to in writing, Licensor provides the Work (and each
Contributor provides its Contributions) on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
implied, including, without limitation, any warranties or conditions
of TITLE, NON-INFRINGEMENT, MERCHANTABILITY, or FITNESS FOR A
PARTICULAR PURPOSE. You are solely responsible for determining the
appropriateness of using or redistributing the Work and assume any
risks associated with Your exercise of permissions under this License.
8. Limitation of Liability. In no event and under no legal theory,
whether in tort (including negligence), contract, or otherwise,
unless required by applicable law (such as deliberate and grossly
negligent acts) or agreed to in writing, shall any Contributor be
liable to You for damages, including any direct, indirect, special,
incidental, or consequential damages of any character arising as a
result of this License or out of the use or inability to use the
Work (including but not limited to damages for loss of goodwill,
work stoppage, computer failure or malfunction, or any and all
other commercial damages or losses), even if such Contributor
has been advised of the possibility of such damages.
9. Accepting Warranty or Additional Liability. While redistributing
the Work or Derivative Works thereof, You may choose to offer,
and charge a fee for, acceptance of support, warranty, indemnity,
or other liability obligations and/or rights consistent with this
License. However, in accepting such obligations, You may act only
on Your own behalf and on Your sole responsibility, not on behalf
of any other Contributor, and only if You agree to indemnify,
defend, and hold each Contributor harmless for any liability
incurred by, or claims asserted against, such Contributor by reason
of your accepting any such warranty or additional liability.
END OF TERMS AND CONDITIONS

19
vendor/bevy_audio/LICENSE-MIT vendored Normal file

@@ -0,0 +1,19 @@
MIT License
Permission is hereby granted, free of charge, to any person obtaining a copy
of this software and associated documentation files (the "Software"), to deal
in the Software without restriction, including without limitation the rights
to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
copies of the Software, and to permit persons to whom the Software is
furnished to do so, subject to the following conditions:
The above copyright notice and this permission notice shall be included in all
copies or substantial portions of the Software.
THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
SOFTWARE.

7
vendor/bevy_audio/README.md vendored Normal file

@@ -0,0 +1,7 @@
# Bevy Audio
[![License](https://img.shields.io/badge/license-MIT%2FApache-blue.svg)](https://github.com/bevyengine/bevy#license)
[![Crates.io](https://img.shields.io/crates/v/bevy_audio.svg)](https://crates.io/crates/bevy_audio)
[![Downloads](https://img.shields.io/crates/d/bevy_audio.svg)](https://crates.io/crates/bevy_audio)
[![Docs](https://docs.rs/bevy_audio/badge.svg)](https://docs.rs/bevy_audio/latest/bevy_audio/)
[![Discord](https://img.shields.io/discord/691052431525675048.svg?label=&logo=discord&logoColor=ffffff&color=7389D8&labelColor=6A7EC2)](https://discord.gg/bevy)

247
vendor/bevy_audio/src/audio.rs vendored Normal file

@@ -0,0 +1,247 @@
use crate::{AudioSource, Decodable, Volume};
use bevy_asset::{Asset, Handle};
use bevy_ecs::prelude::*;
use bevy_math::Vec3;
use bevy_reflect::prelude::*;
/// The way Bevy manages the sound playback.
#[derive(Debug, Clone, Copy, Reflect)]
#[reflect(Clone)]
pub enum PlaybackMode {
/// Play the sound once. Do nothing when it ends.
///
/// Note: It is not possible to reuse an `AudioPlayer` after it has finished playing and
/// the underlying `AudioSink` or `SpatialAudioSink` has been drained.
///
/// To replay a sound, the audio components provided by `AudioPlayer` must be removed and
/// added again.
Once,
/// Repeat the sound forever.
Loop,
/// Despawn the entity and its children when the sound finishes playing.
Despawn,
/// Remove the audio components from the entity, when the sound finishes playing.
Remove,
}
/// Initial settings to be used when audio starts playing.
///
/// If you would like to control the audio while it is playing, query for the
/// [`AudioSink`][crate::AudioSink] or [`SpatialAudioSink`][crate::SpatialAudioSink]
/// components. Changes to this component will *not* be applied to already-playing audio.
#[derive(Component, Clone, Copy, Debug, Reflect)]
#[reflect(Clone, Default, Component, Debug)]
pub struct PlaybackSettings {
/// The desired playback behavior.
pub mode: PlaybackMode,
/// Volume to play at.
pub volume: Volume,
/// Speed to play at.
pub speed: f32,
/// Create the sink in paused state.
/// Useful for "deferred playback", if you want to prepare
/// the entity, but hear the sound later.
pub paused: bool,
/// Whether to create the sink in muted state or not.
///
/// This is useful for audio that should be initially muted. You can still
/// set the initial volume and it is applied when the audio is unmuted.
pub muted: bool,
/// Enables spatial audio for this source.
///
/// See also: [`SpatialListener`].
///
/// Note: Bevy does not currently support HRTF or any other high-quality 3D sound rendering
/// features. Spatial audio is implemented via simple left-right stereo panning.
pub spatial: bool,
/// Optional scale factor applied to the positions of this audio source and the listener,
/// overriding the default value configured on [`AudioPlugin::default_spatial_scale`](crate::AudioPlugin::default_spatial_scale).
pub spatial_scale: Option<SpatialScale>,
}
impl Default for PlaybackSettings {
fn default() -> Self {
Self::ONCE
}
}
impl PlaybackSettings {
/// Will play the associated audio source once.
///
/// Note: It is not possible to reuse an `AudioPlayer` after it has finished playing and
/// the underlying `AudioSink` or `SpatialAudioSink` has been drained.
///
/// To replay a sound, the audio components provided by `AudioPlayer` must be removed and
/// added again.
pub const ONCE: PlaybackSettings = PlaybackSettings {
mode: PlaybackMode::Once,
volume: Volume::Linear(1.0),
speed: 1.0,
paused: false,
muted: false,
spatial: false,
spatial_scale: None,
};
/// Will play the associated audio source in a loop.
pub const LOOP: PlaybackSettings = PlaybackSettings {
mode: PlaybackMode::Loop,
..PlaybackSettings::ONCE
};
/// Will play the associated audio source once and despawn the entity afterwards.
pub const DESPAWN: PlaybackSettings = PlaybackSettings {
mode: PlaybackMode::Despawn,
..PlaybackSettings::ONCE
};
/// Will play the associated audio source once and remove the audio components afterwards.
pub const REMOVE: PlaybackSettings = PlaybackSettings {
mode: PlaybackMode::Remove,
..PlaybackSettings::ONCE
};
/// Helper to start in a paused state.
pub const fn paused(mut self) -> Self {
self.paused = true;
self
}
/// Helper to start muted.
pub const fn muted(mut self) -> Self {
self.muted = true;
self
}
/// Helper to set the volume from start of playback.
pub const fn with_volume(mut self, volume: Volume) -> Self {
self.volume = volume;
self
}
/// Helper to set the speed from start of playback.
pub const fn with_speed(mut self, speed: f32) -> Self {
self.speed = speed;
self
}
/// Helper to enable or disable spatial audio.
pub const fn with_spatial(mut self, spatial: bool) -> Self {
self.spatial = spatial;
self
}
/// Helper to use a custom spatial scale.
pub const fn with_spatial_scale(mut self, spatial_scale: SpatialScale) -> Self {
self.spatial_scale = Some(spatial_scale);
self
}
}
/// Settings for the listener for spatial audio sources.
///
/// This must be accompanied by `Transform` and `GlobalTransform`.
/// Only one entity with a `SpatialListener` should be present at any given time.
#[derive(Component, Clone, Debug, Reflect)]
#[reflect(Clone, Default, Component, Debug)]
pub struct SpatialListener {
/// Left ear position relative to the `GlobalTransform`.
pub left_ear_offset: Vec3,
/// Right ear position relative to the `GlobalTransform`.
pub right_ear_offset: Vec3,
}
impl Default for SpatialListener {
fn default() -> Self {
Self::new(4.)
}
}
impl SpatialListener {
/// Creates a new `SpatialListener` component.
///
/// `gap` is the distance between the left and right "ears" of the listener. Ears are
/// positioned on the x axis.
pub fn new(gap: f32) -> Self {
SpatialListener {
left_ear_offset: Vec3::X * gap / -2.0,
right_ear_offset: Vec3::X * gap / 2.0,
}
}
}
/// A scale factor applied to the positions of audio sources and listeners for
/// spatial audio.
///
/// Default is `Vec3::ONE`.
#[derive(Clone, Copy, Debug, Reflect)]
#[reflect(Clone, Default)]
pub struct SpatialScale(pub Vec3);
impl SpatialScale {
/// Create a new `SpatialScale` with the same value for all 3 dimensions.
pub const fn new(scale: f32) -> Self {
Self(Vec3::splat(scale))
}
/// Create a new `SpatialScale` with the same value for `x` and `y`, and `0.0`
/// for `z`.
pub const fn new_2d(scale: f32) -> Self {
Self(Vec3::new(scale, scale, 0.0))
}
}
impl Default for SpatialScale {
fn default() -> Self {
Self(Vec3::ONE)
}
}
/// The default scale factor applied to the positions of audio sources and listeners for
/// spatial audio. Can be overridden for individual sounds in [`PlaybackSettings`].
///
/// You may need to adjust this scale to fit your world's units.
///
/// Default is `Vec3::ONE`.
#[derive(Resource, Default, Clone, Copy, Reflect)]
#[reflect(Resource, Default, Clone)]
pub struct DefaultSpatialScale(pub SpatialScale);
/// A component for playing a sound.
///
/// Insert this component onto an entity to trigger an audio source to begin playing.
///
/// If the handle refers to an unavailable asset (such as if it has not finished loading yet),
/// the audio will not begin playing immediately. The audio will play when the asset is ready.
///
/// When Bevy begins the audio playback, an [`AudioSink`][crate::AudioSink] component will be
/// added to the entity. You can use that component to control the audio settings during playback.
///
/// Playback can be configured using the [`PlaybackSettings`] component. Note that changes to the
/// `PlaybackSettings` component will *not* affect already-playing audio.
#[derive(Component, Reflect)]
#[reflect(Component, Clone)]
#[require(PlaybackSettings)]
pub struct AudioPlayer<Source = AudioSource>(pub Handle<Source>)
where
Source: Asset + Decodable;
impl<Source> Clone for AudioPlayer<Source>
where
Source: Asset + Decodable,
{
fn clone(&self) -> Self {
Self(self.0.clone())
}
}
impl AudioPlayer<AudioSource> {
/// Creates a new [`AudioPlayer`] with the given [`Handle<AudioSource>`].
///
/// For convenience reasons, this hard-codes the [`AudioSource`] type. If you want to
/// initialize an [`AudioPlayer`] with a different type, just initialize it directly using normal
/// tuple struct syntax.
pub fn new(source: Handle<AudioSource>) -> Self {
Self(source)
}
}

334
vendor/bevy_audio/src/audio_output.rs vendored Normal file

@@ -0,0 +1,334 @@
use crate::{
AudioPlayer, Decodable, DefaultSpatialScale, GlobalVolume, PlaybackMode, PlaybackSettings,
SpatialAudioSink, SpatialListener,
};
use bevy_asset::{Asset, Assets};
use bevy_ecs::{prelude::*, system::SystemParam};
use bevy_math::Vec3;
use bevy_transform::prelude::GlobalTransform;
use rodio::{OutputStream, OutputStreamHandle, Sink, Source, SpatialSink};
use tracing::warn;
use crate::{AudioSink, AudioSinkPlayback};
/// Used internally to play audio on the current "audio device"
///
/// ## Note
///
/// Initializing this resource will leak [`OutputStream`]
/// using [`std::mem::forget`].
/// This is done to avoid storing this in the struct (and making this `!Send`)
/// while preventing it from dropping (to avoid halting of audio).
///
/// This is fine when initializing this once (as is default when adding this plugin),
/// since the memory cost will be the same.
/// However, repeatedly inserting this resource into the app will **leak more memory**.
#[derive(Resource)]
pub(crate) struct AudioOutput {
stream_handle: Option<OutputStreamHandle>,
}
impl Default for AudioOutput {
fn default() -> Self {
if let Ok((stream, stream_handle)) = OutputStream::try_default() {
// We leak `OutputStream` to prevent the audio from stopping.
core::mem::forget(stream);
Self {
stream_handle: Some(stream_handle),
}
} else {
warn!("No audio device found.");
Self {
stream_handle: None,
}
}
}
}
/// Marker for internal use, to despawn entities when playback finishes.
#[derive(Component, Default)]
pub struct PlaybackDespawnMarker;
/// Marker for internal use, to remove audio components when playback finishes.
#[derive(Component, Default)]
pub struct PlaybackRemoveMarker;
#[derive(SystemParam)]
pub(crate) struct EarPositions<'w, 's> {
pub(crate) query: Query<'w, 's, (Entity, &'static GlobalTransform, &'static SpatialListener)>,
}
impl<'w, 's> EarPositions<'w, 's> {
/// Gets a set of transformed ear positions.
///
/// If there are no listeners, use the default values. If a user has added multiple
/// listeners for whatever reason, we will return the first value.
pub(crate) fn get(&self) -> (Vec3, Vec3) {
let (left_ear, right_ear) = self
.query
.iter()
.next()
.map(|(_, transform, settings)| {
(
transform.transform_point(settings.left_ear_offset),
transform.transform_point(settings.right_ear_offset),
)
})
.unwrap_or_else(|| {
let settings = SpatialListener::default();
(settings.left_ear_offset, settings.right_ear_offset)
});
(left_ear, right_ear)
}
pub(crate) fn multiple_listeners(&self) -> bool {
self.query.iter().len() > 1
}
}
/// Plays "queued" audio through the [`AudioOutput`] resource.
///
/// "Queued" audio is any audio entity (with an [`AudioPlayer`] component) that does not have an
/// [`AudioSink`]/[`SpatialAudioSink`] component.
///
/// This system detects such entities, checks if their source asset
/// data is available, and creates/inserts the sink.
pub(crate) fn play_queued_audio_system<Source: Asset + Decodable>(
audio_output: Res<AudioOutput>,
audio_sources: Res<Assets<Source>>,
global_volume: Res<GlobalVolume>,
query_nonplaying: Query<
(
Entity,
&AudioPlayer<Source>,
&PlaybackSettings,
Option<&GlobalTransform>,
),
(Without<AudioSink>, Without<SpatialAudioSink>),
>,
ear_positions: EarPositions,
default_spatial_scale: Res<DefaultSpatialScale>,
mut commands: Commands,
) where
f32: rodio::cpal::FromSample<Source::DecoderItem>,
{
let Some(stream_handle) = audio_output.stream_handle.as_ref() else {
// audio output unavailable; cannot play sound
return;
};
for (entity, source_handle, settings, maybe_emitter_transform) in &query_nonplaying {
let Some(audio_source) = audio_sources.get(&source_handle.0) else {
continue;
};
// audio data is available (has loaded), begin playback and insert sink component
if settings.spatial {
let (left_ear, right_ear) = ear_positions.get();
// We can only use one `SpatialListener`. If there are more than that, then
// the user may have made a mistake.
if ear_positions.multiple_listeners() {
warn!(
"Multiple SpatialListeners found. Using {}.",
ear_positions.query.iter().next().unwrap().0
);
}
let scale = settings.spatial_scale.unwrap_or(default_spatial_scale.0).0;
let emitter_translation = if let Some(emitter_transform) = maybe_emitter_transform {
(emitter_transform.translation() * scale).into()
} else {
warn!("Spatial AudioPlayer with no GlobalTransform component. Using zero.");
Vec3::ZERO.into()
};
let sink = match SpatialSink::try_new(
stream_handle,
emitter_translation,
(left_ear * scale).into(),
(right_ear * scale).into(),
) {
Ok(sink) => sink,
Err(err) => {
warn!("Error creating spatial sink: {err:?}");
continue;
}
};
match settings.mode {
PlaybackMode::Loop => sink.append(audio_source.decoder().repeat_infinite()),
PlaybackMode::Once | PlaybackMode::Despawn | PlaybackMode::Remove => {
sink.append(audio_source.decoder());
}
};
let mut sink = SpatialAudioSink::new(sink);
if settings.muted {
sink.mute();
}
sink.set_speed(settings.speed);
sink.set_volume(settings.volume * global_volume.volume);
if settings.paused {
sink.pause();
}
match settings.mode {
PlaybackMode::Loop | PlaybackMode::Once => commands.entity(entity).insert(sink),
PlaybackMode::Despawn => commands
.entity(entity)
// PERF: insert as bundle to reduce archetype moves
.insert((sink, PlaybackDespawnMarker)),
PlaybackMode::Remove => commands
.entity(entity)
// PERF: insert as bundle to reduce archetype moves
.insert((sink, PlaybackRemoveMarker)),
};
} else {
let sink = match Sink::try_new(stream_handle) {
Ok(sink) => sink,
Err(err) => {
warn!("Error creating sink: {err:?}");
continue;
}
};
match settings.mode {
PlaybackMode::Loop => sink.append(audio_source.decoder().repeat_infinite()),
PlaybackMode::Once | PlaybackMode::Despawn | PlaybackMode::Remove => {
sink.append(audio_source.decoder());
}
};
let mut sink = AudioSink::new(sink);
if settings.muted {
sink.mute();
}
sink.set_speed(settings.speed);
sink.set_volume(settings.volume * global_volume.volume);
if settings.paused {
sink.pause();
}
match settings.mode {
PlaybackMode::Loop | PlaybackMode::Once => commands.entity(entity).insert(sink),
PlaybackMode::Despawn => commands
.entity(entity)
// PERF: insert as bundle to reduce archetype moves
.insert((sink, PlaybackDespawnMarker)),
PlaybackMode::Remove => commands
.entity(entity)
// PERF: insert as bundle to reduce archetype moves
.insert((sink, PlaybackRemoveMarker)),
};
}
}
}
pub(crate) fn cleanup_finished_audio<T: Decodable + Asset>(
mut commands: Commands,
query_nonspatial_despawn: Query<
(Entity, &AudioSink),
(With<PlaybackDespawnMarker>, With<AudioPlayer<T>>),
>,
query_spatial_despawn: Query<
(Entity, &SpatialAudioSink),
(With<PlaybackDespawnMarker>, With<AudioPlayer<T>>),
>,
query_nonspatial_remove: Query<
(Entity, &AudioSink),
(With<PlaybackRemoveMarker>, With<AudioPlayer<T>>),
>,
query_spatial_remove: Query<
(Entity, &SpatialAudioSink),
(With<PlaybackRemoveMarker>, With<AudioPlayer<T>>),
>,
) {
for (entity, sink) in &query_nonspatial_despawn {
if sink.sink.empty() {
commands.entity(entity).despawn();
}
}
for (entity, sink) in &query_spatial_despawn {
if sink.sink.empty() {
commands.entity(entity).despawn();
}
}
for (entity, sink) in &query_nonspatial_remove {
if sink.sink.empty() {
commands.entity(entity).remove::<(
AudioPlayer<T>,
AudioSink,
PlaybackSettings,
PlaybackRemoveMarker,
)>();
}
}
for (entity, sink) in &query_spatial_remove {
if sink.sink.empty() {
commands.entity(entity).remove::<(
AudioPlayer<T>,
SpatialAudioSink,
PlaybackSettings,
PlaybackRemoveMarker,
)>();
}
}
}
/// Run Condition to only play audio if the audio output is available
pub(crate) fn audio_output_available(audio_output: Res<AudioOutput>) -> bool {
audio_output.stream_handle.is_some()
}
/// Updates spatial audio sinks when emitter positions change.
pub(crate) fn update_emitter_positions(
mut emitters: Query<
(&GlobalTransform, &SpatialAudioSink, &PlaybackSettings),
Or<(Changed<GlobalTransform>, Changed<PlaybackSettings>)>,
>,
default_spatial_scale: Res<DefaultSpatialScale>,
) {
for (transform, sink, settings) in emitters.iter_mut() {
let scale = settings.spatial_scale.unwrap_or(default_spatial_scale.0).0;
let translation = transform.translation() * scale;
sink.set_emitter_position(translation);
}
}
/// Updates spatial audio sink ear positions when spatial listeners change.
pub(crate) fn update_listener_positions(
mut emitters: Query<(&SpatialAudioSink, &PlaybackSettings)>,
changed_listener: Query<
(),
(
Or<(
Changed<SpatialListener>,
Changed<GlobalTransform>,
Changed<PlaybackSettings>,
)>,
With<SpatialListener>,
),
>,
ear_positions: EarPositions,
default_spatial_scale: Res<DefaultSpatialScale>,
) {
if !default_spatial_scale.is_changed() && changed_listener.is_empty() {
return;
}
let (left_ear, right_ear) = ear_positions.get();
for (sink, settings) in emitters.iter_mut() {
let scale = settings.spatial_scale.unwrap_or(default_spatial_scale.0).0;
sink.set_ears_position(left_ear * scale, right_ear * scale);
}
}

119
vendor/bevy_audio/src/audio_source.rs vendored Normal file

@@ -0,0 +1,119 @@
use alloc::sync::Arc;
use bevy_asset::{io::Reader, Asset, AssetLoader, LoadContext};
use bevy_reflect::TypePath;
use std::io::Cursor;
/// A source of audio data
#[derive(Asset, Debug, Clone, TypePath)]
pub struct AudioSource {
/// Raw data of the audio source.
///
/// The data must be one of the file formats supported by Bevy (`wav`, `ogg`, `flac`, or `mp3`).
/// However, support for these file formats is not part of Bevy's [`default feature set`](https://docs.rs/bevy/latest/bevy/index.html#default-features).
/// In order to be able to use these file formats, you will have to enable the appropriate [`optional features`](https://docs.rs/bevy/latest/bevy/index.html#optional-features).
///
/// It is decoded using [`rodio::decoder::Decoder`](https://docs.rs/rodio/latest/rodio/decoder/struct.Decoder.html).
/// The decoder has conditionally compiled methods
/// depending on the features enabled.
/// If the format used is not enabled,
/// then this will panic with an `UnrecognizedFormat` error.
pub bytes: Arc<[u8]>,
}
impl AsRef<[u8]> for AudioSource {
fn as_ref(&self) -> &[u8] {
&self.bytes
}
}
/// Loads files as [`AudioSource`] [`Assets`](bevy_asset::Assets)
///
/// This asset loader supports different audio formats based on the enabled Bevy features.
/// The feature `bevy/vorbis` enables loading from `.ogg` files and is enabled by default.
/// Other file endings can be loaded with additional features:
/// `.mp3` with `bevy/mp3`
/// `.flac` with `bevy/flac`
/// `.wav` with `bevy/wav`
#[derive(Default)]
pub struct AudioLoader;
impl AssetLoader for AudioLoader {
type Asset = AudioSource;
type Settings = ();
type Error = std::io::Error;
async fn load(
&self,
reader: &mut dyn Reader,
_settings: &Self::Settings,
_load_context: &mut LoadContext<'_>,
) -> Result<AudioSource, Self::Error> {
let mut bytes = Vec::new();
reader.read_to_end(&mut bytes).await?;
Ok(AudioSource {
bytes: bytes.into(),
})
}
fn extensions(&self) -> &[&str] {
&[
#[cfg(feature = "mp3")]
"mp3",
#[cfg(feature = "flac")]
"flac",
#[cfg(feature = "wav")]
"wav",
#[cfg(feature = "vorbis")]
"oga",
#[cfg(feature = "vorbis")]
"ogg",
#[cfg(feature = "vorbis")]
"spx",
]
}
}
/// A type implementing this trait can be converted to a [`rodio::Source`] type.
///
/// It must be [`Send`] and [`Sync`] in order to be registered.
/// Types that implement this trait usually contain raw sound data that can be converted into an iterator of samples.
/// This trait is implemented for [`AudioSource`].
/// Check the example [`decodable`](https://github.com/bevyengine/bevy/blob/latest/examples/audio/decodable.rs) for how to implement this trait on a custom type.
pub trait Decodable: Send + Sync + 'static {
/// The type of the audio samples.
/// Usually a [`u16`], [`i16`] or [`f32`], as those implement [`rodio::Sample`].
/// Other types can implement the [`rodio::Sample`] trait as well.
type DecoderItem: rodio::Sample + Send + Sync;
/// The type of the iterator of the audio samples,
/// which iterates over samples of type [`Self::DecoderItem`].
/// Must be a [`rodio::Source`] so that it can provide information on the audio it is iterating over.
type Decoder: rodio::Source + Send + Iterator<Item = Self::DecoderItem>;
/// Build and return a [`Self::Decoder`] of the implementing type
fn decoder(&self) -> Self::Decoder;
}
impl Decodable for AudioSource {
type DecoderItem = <rodio::Decoder<Cursor<AudioSource>> as Iterator>::Item;
type Decoder = rodio::Decoder<Cursor<AudioSource>>;
fn decoder(&self) -> Self::Decoder {
rodio::Decoder::new(Cursor::new(self.clone())).unwrap()
}
}
/// A trait that allows adding a custom audio source to the object.
/// This is implemented for [`App`][bevy_app::App] to allow registering custom [`Decodable`] types.
pub trait AddAudioSource {
/// Registers an audio source.
/// The type must implement [`Decodable`],
/// so that it can be converted to a [`rodio::Source`] type,
/// and [`Asset`], so that it can be registered as an asset.
/// To use this method on [`App`][bevy_app::App],
/// the [audio][super::AudioPlugin] and [asset][bevy_asset::AssetPlugin] plugins must be added first.
fn add_audio_source<T>(&mut self) -> &mut Self
where
T: Decodable + Asset,
f32: rodio::cpal::FromSample<T::DecoderItem>;
}

125
vendor/bevy_audio/src/lib.rs vendored Normal file

@@ -0,0 +1,125 @@
#![forbid(unsafe_code)]
#![cfg_attr(docsrs, feature(doc_auto_cfg))]
#![doc(
html_logo_url = "https://bevyengine.org/assets/icon.png",
html_favicon_url = "https://bevyengine.org/assets/icon.png"
)]
//! Audio support for the game engine Bevy
//!
//! ```no_run
//! # use bevy_ecs::prelude::*;
//! # use bevy_audio::{AudioPlayer, AudioPlugin, AudioSource, PlaybackSettings};
//! # use bevy_asset::{AssetPlugin, AssetServer};
//! # use bevy_app::{App, AppExit, NoopPluginGroup as MinimalPlugins, Startup};
//! fn main() {
//! App::new()
//! .add_plugins((MinimalPlugins, AssetPlugin::default(), AudioPlugin::default()))
//! .add_systems(Startup, play_background_audio)
//! .run();
//! }
//!
//! fn play_background_audio(asset_server: Res<AssetServer>, mut commands: Commands) {
//! commands.spawn((
//! AudioPlayer::new(asset_server.load("background_audio.ogg")),
//! PlaybackSettings::LOOP,
//! ));
//! }
//! ```
extern crate alloc;
mod audio;
mod audio_output;
mod audio_source;
mod pitch;
mod sinks;
mod volume;
/// The audio prelude.
///
/// This includes the most common types in this crate, re-exported for your convenience.
pub mod prelude {
#[doc(hidden)]
pub use crate::{
AudioPlayer, AudioSink, AudioSinkPlayback, AudioSource, Decodable, GlobalVolume, Pitch,
PlaybackSettings, SpatialAudioSink, SpatialListener,
};
}
pub use audio::*;
pub use audio_source::*;
pub use pitch::*;
pub use volume::*;
pub use rodio::{cpal::Sample as CpalSample, source::Source, Sample};
pub use sinks::*;
use bevy_app::prelude::*;
use bevy_asset::{Asset, AssetApp};
use bevy_ecs::prelude::*;
use bevy_transform::TransformSystem;
use audio_output::*;
/// Set for the audio playback systems, so they can share a run condition
#[derive(SystemSet, Debug, Default, Clone, Copy, PartialEq, Eq, Hash)]
struct AudioPlaySet;
/// Adds support for audio playback to a Bevy Application
///
/// Insert an [`AudioPlayer`] onto your entities to play audio.
#[derive(Default)]
pub struct AudioPlugin {
/// The global volume for all audio entities.
pub global_volume: GlobalVolume,
/// The scale factor applied to the positions of audio sources and listeners for
/// spatial audio.
pub default_spatial_scale: SpatialScale,
}
impl Plugin for AudioPlugin {
fn build(&self, app: &mut App) {
app.register_type::<Volume>()
.register_type::<GlobalVolume>()
.register_type::<SpatialListener>()
.register_type::<DefaultSpatialScale>()
.register_type::<PlaybackMode>()
.register_type::<PlaybackSettings>()
.insert_resource(self.global_volume)
.insert_resource(DefaultSpatialScale(self.default_spatial_scale))
.configure_sets(
PostUpdate,
AudioPlaySet
.run_if(audio_output_available)
.after(TransformSystem::TransformPropagate), // For spatial audio transforms
)
.add_systems(
PostUpdate,
(update_emitter_positions, update_listener_positions).in_set(AudioPlaySet),
)
.init_resource::<AudioOutput>();
#[cfg(any(feature = "mp3", feature = "flac", feature = "wav", feature = "vorbis"))]
{
app.add_audio_source::<AudioSource>();
app.init_asset_loader::<AudioLoader>();
}
app.add_audio_source::<Pitch>();
}
}
impl AddAudioSource for App {
fn add_audio_source<T>(&mut self) -> &mut Self
where
T: Decodable + Asset,
f32: rodio::cpal::FromSample<T::DecoderItem>,
{
self.init_asset::<T>().add_systems(
PostUpdate,
(play_queued_audio_system::<T>, cleanup_finished_audio::<T>).in_set(AudioPlaySet),
);
self
}
}

35
vendor/bevy_audio/src/pitch.rs vendored Normal file

@@ -0,0 +1,35 @@
use crate::Decodable;
use bevy_asset::Asset;
use bevy_reflect::TypePath;
use rodio::{
source::{SineWave, TakeDuration},
Source,
};
/// A source of sine wave sound
#[derive(Asset, Debug, Clone, TypePath)]
pub struct Pitch {
/// Frequency at which sound will be played
pub frequency: f32,
/// Duration for which sound will be played
pub duration: core::time::Duration,
}
impl Pitch {
/// Creates a new note
pub fn new(frequency: f32, duration: core::time::Duration) -> Self {
Pitch {
frequency,
duration,
}
}
}
impl Decodable for Pitch {
type DecoderItem = <SineWave as Iterator>::Item;
type Decoder = TakeDuration<SineWave>;
fn decoder(&self) -> Self::Decoder {
SineWave::new(self.frequency).take_duration(self.duration)
}
}

374
vendor/bevy_audio/src/sinks.rs vendored Normal file

@@ -0,0 +1,374 @@
use bevy_ecs::component::Component;
use bevy_math::Vec3;
use bevy_transform::prelude::Transform;
use rodio::{Sink, SpatialSink};
use crate::Volume;
/// Common interactions with an audio sink.
pub trait AudioSinkPlayback {
/// Gets the volume of the sound as a [`Volume`].
///
/// If the sink is muted, this returns the managed volume rather than the
/// sink's actual volume. This allows you to use the returned volume as if
/// the sink were not muted, because a muted sink has a physical volume of
/// 0.
fn volume(&self) -> Volume;
/// Changes the volume of the sound to the given [`Volume`].
///
/// If the sink is muted, changing the volume won't unmute it, i.e. the
/// sink's volume will remain "off" / "muted". However, the sink will
/// remember the volume change and it will be used when
/// [`unmute`](Self::unmute) is called. This allows you to control the
/// volume even when the sink is muted.
fn set_volume(&mut self, volume: Volume);
/// Gets the speed of the sound.
///
/// The value `1.0` is the "normal" speed (unfiltered input). Any value other than `1.0`
/// will change the play speed of the sound.
fn speed(&self) -> f32;
/// Changes the speed of the sound.
///
/// The value `1.0` is the "normal" speed (unfiltered input). Any value other than `1.0`
/// will change the play speed of the sound.
fn set_speed(&self, speed: f32);
/// Resumes playback of a paused sink.
///
/// No effect if not paused.
fn play(&self);
/// Pauses playback of this sink.
///
/// No effect if already paused.
/// A paused sink can be resumed with [`play`](Self::play).
fn pause(&self);
/// Toggles playback of the sink.
///
/// If the sink is paused, toggling playback resumes it. If the sink is
/// playing, toggling playback pauses it.
fn toggle_playback(&self) {
if self.is_paused() {
self.play();
} else {
self.pause();
}
}
/// Returns true if the sink is paused.
///
/// Sinks can be paused and resumed using [`pause`](Self::pause) and [`play`](Self::play).
fn is_paused(&self) -> bool;
/// Stops the sink.
///
/// It won't be possible to restart it afterwards.
fn stop(&self);
/// Returns true if this sink has no more sounds to play.
fn empty(&self) -> bool;
/// Returns true if the sink is muted.
fn is_muted(&self) -> bool;
/// Mutes the sink.
///
/// Muting a sink sets the volume to 0. Use [`unmute`](Self::unmute) to
/// unmute the sink and restore the original volume.
fn mute(&mut self);
/// Unmutes the sink.
///
/// Restores the volume to the value it was before it was muted.
fn unmute(&mut self);
/// Toggles whether the sink is muted or not.
fn toggle_mute(&mut self) {
if self.is_muted() {
self.unmute();
} else {
self.mute();
}
}
}
/// Used to control audio during playback.
///
/// Bevy inserts this component onto your entities when it begins playing an audio source.
/// Use [`AudioPlayer`][crate::AudioPlayer] to trigger that to happen.
///
/// You can use this component to modify the playback settings while the audio is playing.
///
/// If this component is removed from an entity, and an [`AudioSource`][crate::AudioSource] is
/// attached to that entity, that [`AudioSource`][crate::AudioSource] will start playing. If
/// that source is unchanged, that translates to the audio restarting.
#[derive(Component)]
pub struct AudioSink {
pub(crate) sink: Sink,
/// Managed volume allows the sink to be muted without losing the user's
/// intended volume setting.
///
/// This is used to restore the volume when [`unmute`](Self::unmute) is
/// called.
///
/// If the sink is not muted, this is `None`.
///
/// If the sink is muted, this is `Some(volume)` where `volume` is the
/// user's intended volume setting, even if the underlying sink's volume is
/// 0.
pub(crate) managed_volume: Option<Volume>,
}
impl AudioSink {
/// Create a new audio sink.
pub fn new(sink: Sink) -> Self {
Self {
sink,
managed_volume: None,
}
}
}
impl AudioSinkPlayback for AudioSink {
fn volume(&self) -> Volume {
self.managed_volume
.unwrap_or_else(|| Volume::Linear(self.sink.volume()))
}
fn set_volume(&mut self, volume: Volume) {
if self.is_muted() {
self.managed_volume = Some(volume);
} else {
self.sink.set_volume(volume.to_linear());
}
}
fn speed(&self) -> f32 {
self.sink.speed()
}
fn set_speed(&self, speed: f32) {
self.sink.set_speed(speed);
}
fn play(&self) {
self.sink.play();
}
fn pause(&self) {
self.sink.pause();
}
fn is_paused(&self) -> bool {
self.sink.is_paused()
}
fn stop(&self) {
self.sink.stop();
}
fn empty(&self) -> bool {
self.sink.empty()
}
fn is_muted(&self) -> bool {
self.managed_volume.is_some()
}
fn mute(&mut self) {
self.managed_volume = Some(self.volume());
self.sink.set_volume(0.0);
}
fn unmute(&mut self) {
if let Some(volume) = self.managed_volume.take() {
self.sink.set_volume(volume.to_linear());
}
}
}
/// Used to control spatial audio during playback.
///
/// Bevy inserts this component onto your entities when it begins playing an audio source
/// that's configured to use spatial audio.
///
/// You can use this component to modify the playback settings while the audio is playing.
///
/// If this component is removed from an entity, and an [`AudioSource`][crate::AudioSource] is
/// attached to that entity, that [`AudioSource`][crate::AudioSource] will start playing. If
/// that source is unchanged, that translates to the audio restarting.
#[derive(Component)]
pub struct SpatialAudioSink {
pub(crate) sink: SpatialSink,
/// Managed volume allows the sink to be muted without losing the user's
/// intended volume setting.
///
/// This is used to restore the volume when [`unmute`](Self::unmute) is
/// called.
///
/// If the sink is not muted, this is `None`.
///
/// If the sink is muted, this is `Some(volume)` where `volume` is the
/// user's intended volume setting, even if the underlying sink's volume is
/// 0.
pub(crate) managed_volume: Option<Volume>,
}
impl SpatialAudioSink {
/// Create a new spatial audio sink.
pub fn new(sink: SpatialSink) -> Self {
Self {
sink,
managed_volume: None,
}
}
}
impl AudioSinkPlayback for SpatialAudioSink {
fn volume(&self) -> Volume {
self.managed_volume
.unwrap_or_else(|| Volume::Linear(self.sink.volume()))
}
fn set_volume(&mut self, volume: Volume) {
if self.is_muted() {
self.managed_volume = Some(volume);
} else {
self.sink.set_volume(volume.to_linear());
}
}
fn speed(&self) -> f32 {
self.sink.speed()
}
fn set_speed(&self, speed: f32) {
self.sink.set_speed(speed);
}
fn play(&self) {
self.sink.play();
}
fn pause(&self) {
self.sink.pause();
}
fn is_paused(&self) -> bool {
self.sink.is_paused()
}
fn stop(&self) {
self.sink.stop();
}
fn empty(&self) -> bool {
self.sink.empty()
}
fn is_muted(&self) -> bool {
self.managed_volume.is_some()
}
fn mute(&mut self) {
self.managed_volume = Some(self.volume());
self.sink.set_volume(0.0);
}
fn unmute(&mut self) {
if let Some(volume) = self.managed_volume.take() {
self.sink.set_volume(volume.to_linear());
}
}
}
impl SpatialAudioSink {
/// Set the two ears position.
pub fn set_ears_position(&self, left_position: Vec3, right_position: Vec3) {
self.sink.set_left_ear_position(left_position.to_array());
self.sink.set_right_ear_position(right_position.to_array());
}
/// Set the listener position, with an ear on each side separated by `gap`.
pub fn set_listener_position(&self, position: Transform, gap: f32) {
self.set_ears_position(
position.translation + position.left() * gap / 2.0,
position.translation + position.right() * gap / 2.0,
);
}
/// Set the emitter position.
pub fn set_emitter_position(&self, position: Vec3) {
self.sink.set_emitter_position(position.to_array());
}
}
#[cfg(test)]
mod tests {
use rodio::Sink;
use super::*;
fn test_audio_sink_playback<T: AudioSinkPlayback>(mut audio_sink: T) {
// Test volume
assert_eq!(audio_sink.volume(), Volume::Linear(1.0)); // default volume
audio_sink.set_volume(Volume::Linear(0.5));
assert_eq!(audio_sink.volume(), Volume::Linear(0.5));
audio_sink.set_volume(Volume::Linear(1.0));
assert_eq!(audio_sink.volume(), Volume::Linear(1.0));
// Test speed
assert_eq!(audio_sink.speed(), 1.0); // default speed
audio_sink.set_speed(0.5);
assert_eq!(audio_sink.speed(), 0.5);
audio_sink.set_speed(1.0);
assert_eq!(audio_sink.speed(), 1.0);
// Test playback
assert!(!audio_sink.is_paused()); // default pause state
audio_sink.pause();
assert!(audio_sink.is_paused());
audio_sink.play();
assert!(!audio_sink.is_paused());
// Test toggle playback
audio_sink.pause(); // start paused
audio_sink.toggle_playback();
assert!(!audio_sink.is_paused());
audio_sink.toggle_playback();
assert!(audio_sink.is_paused());
// Test mute
assert!(!audio_sink.is_muted()); // default mute state
audio_sink.mute();
assert!(audio_sink.is_muted());
audio_sink.unmute();
assert!(!audio_sink.is_muted());
// Test volume with mute
audio_sink.set_volume(Volume::Linear(0.5));
audio_sink.mute();
assert_eq!(audio_sink.volume(), Volume::Linear(0.5)); // returns managed volume even though sink volume is 0
audio_sink.unmute();
assert_eq!(audio_sink.volume(), Volume::Linear(0.5)); // managed volume is restored
// Test toggle mute
audio_sink.toggle_mute();
assert!(audio_sink.is_muted());
audio_sink.toggle_mute();
assert!(!audio_sink.is_muted());
}
#[test]
fn test_audio_sink() {
let (sink, _queue_rx) = Sink::new_idle();
let audio_sink = AudioSink::new(sink);
test_audio_sink_playback(audio_sink);
}
}

504
vendor/bevy_audio/src/volume.rs vendored Normal file
View File

@@ -0,0 +1,504 @@
use bevy_ecs::prelude::*;
use bevy_math::ops;
use bevy_reflect::prelude::*;
/// Use this [`Resource`] to control the global volume of all audio.
///
/// Note: Changing [`GlobalVolume`] does not affect already playing audio.
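///
/// # Examples
///
/// A minimal construction example (this assumes, as with [`Volume`], that
/// `GlobalVolume` is re-exported at the crate root; inserting the resource
/// into an app is omitted):
///
/// ```
/// # use bevy_audio::{GlobalVolume, Volume};
/// // Audio started after this resource is applied plays at half volume.
/// let global = GlobalVolume::new(Volume::Linear(0.5));
/// assert_eq!(global.volume, Volume::Linear(0.5));
/// ```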
#[derive(Resource, Debug, Default, Clone, Copy, Reflect)]
#[reflect(Resource, Debug, Default, Clone)]
pub struct GlobalVolume {
/// The global volume of all audio.
pub volume: Volume,
}
impl From<Volume> for GlobalVolume {
fn from(volume: Volume) -> Self {
Self { volume }
}
}
impl GlobalVolume {
/// Create a new [`GlobalVolume`] with the given volume.
pub fn new(volume: Volume) -> Self {
Self { volume }
}
}
/// A [`Volume`] represents an audio source's volume level.
///
/// To create a new [`Volume`] from a linear scale value, use
/// [`Volume::Linear`].
///
/// To create a new [`Volume`] from decibels, use [`Volume::Decibels`].
#[derive(Clone, Copy, Debug, Reflect)]
#[reflect(Clone, Debug, PartialEq)]
pub enum Volume {
/// Create a new [`Volume`] from the given volume in linear scale.
///
/// In a linear scale, the value `1.0` represents the "normal" volume,
/// meaning the audio is played at its original level. Values greater than
/// `1.0` increase the volume, while values between `0.0` and `1.0` decrease
/// the volume. A value of `0.0` effectively mutes the audio.
///
/// # Examples
///
/// ```
/// # use bevy_audio::Volume;
/// # use bevy_math::ops;
/// #
/// # const EPSILON: f32 = 0.01;
///
/// let volume = Volume::Linear(0.5);
/// assert_eq!(volume.to_linear(), 0.5);
/// assert!(ops::abs(volume.to_decibels() - -6.0206) < EPSILON);
///
/// let volume = Volume::Linear(0.0);
/// assert_eq!(volume.to_linear(), 0.0);
/// assert_eq!(volume.to_decibels(), f32::NEG_INFINITY);
///
/// let volume = Volume::Linear(1.0);
/// assert_eq!(volume.to_linear(), 1.0);
/// assert!(ops::abs(volume.to_decibels() - 0.0) < EPSILON);
/// ```
Linear(f32),
/// Create a new [`Volume`] from the given volume in decibels.
///
/// In a decibel scale, the value `0.0` represents the "normal" volume,
/// meaning the audio is played at its original level. Values greater than
/// `0.0` increase the volume, while values less than `0.0` decrease the
/// volume. A value of [`f32::NEG_INFINITY`] decibels effectively mutes the
/// audio.
///
/// # Examples
///
/// ```
/// # use bevy_audio::Volume;
/// # use bevy_math::ops;
/// #
/// # const EPSILON: f32 = 0.01;
///
/// let volume = Volume::Decibels(-5.998);
/// assert!(ops::abs(volume.to_linear() - 0.5) < EPSILON);
///
/// let volume = Volume::Decibels(f32::NEG_INFINITY);
/// assert_eq!(volume.to_linear(), 0.0);
///
/// let volume = Volume::Decibels(0.0);
/// assert_eq!(volume.to_linear(), 1.0);
///
/// let volume = Volume::Decibels(20.0);
/// assert_eq!(volume.to_linear(), 10.0);
/// ```
Decibels(f32),
}
impl Default for Volume {
fn default() -> Self {
Self::Linear(1.0)
}
}
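// `Linear` volumes are compared by magnitude: a negative linear amplitude is
// treated as equal to its positive counterpart, mirroring the `abs()` used by
// `to_linear` and `to_decibels` below.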
impl PartialEq for Volume {
fn eq(&self, other: &Self) -> bool {
use Volume::{Decibels, Linear};
match (self, other) {
(Linear(a), Linear(b)) => a.abs() == b.abs(),
(Decibels(a), Decibels(b)) => a == b,
(a, b) => a.to_decibels() == b.to_decibels(),
}
}
}
impl PartialOrd for Volume {
fn partial_cmp(&self, other: &Self) -> Option<core::cmp::Ordering> {
use Volume::{Decibels, Linear};
Some(match (self, other) {
(Linear(a), Linear(b)) => a.abs().total_cmp(&b.abs()),
(Decibels(a), Decibels(b)) => a.total_cmp(b),
(a, b) => a.to_decibels().total_cmp(&b.to_decibels()),
})
}
}
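/// Converts decibels to a linear amplitude: `linear = 10 ^ (decibels / 20)`.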
#[inline]
fn decibels_to_linear(decibels: f32) -> f32 {
ops::powf(10.0f32, decibels / 20.0)
}
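/// Converts a linear amplitude to decibels: `decibels = 20 * log10(|linear|)`.
/// A linear value of `0.0` yields `f32::NEG_INFINITY`.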
#[inline]
fn linear_to_decibels(linear: f32) -> f32 {
20.0 * ops::log10(linear.abs())
}
impl Volume {
/// Returns the volume in linear scale as a float.
pub fn to_linear(&self) -> f32 {
match self {
Self::Linear(v) => v.abs(),
Self::Decibels(v) => decibels_to_linear(*v),
}
}
/// Returns the volume in decibels as a float.
///
/// If the volume is silent / off / muted, i.e. its underlying linear scale
/// is `0.0`, this method returns negative infinity.
pub fn to_decibels(&self) -> f32 {
match self {
Self::Linear(v) => linear_to_decibels(*v),
Self::Decibels(v) => *v,
}
}
/// The silent volume. Also known as "off" or "muted".
pub const SILENT: Self = Volume::Linear(0.0);
}
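// Addition and subtraction are defined in the linear (amplitude) domain:
// decibel operands are converted to linear, combined, and converted back.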
impl core::ops::Add<Self> for Volume {
type Output = Self;
fn add(self, rhs: Self) -> Self {
use Volume::{Decibels, Linear};
match (self, rhs) {
(Linear(a), Linear(b)) => Linear(a + b),
(Decibels(a), Decibels(b)) => Decibels(linear_to_decibels(
decibels_to_linear(a) + decibels_to_linear(b),
)),
// {Linear, Decibels} favors the left hand side of the operation by
// first converting the right hand side to the same type as the left
// hand side and then performing the operation.
(Linear(..), Decibels(db)) => self + Linear(decibels_to_linear(db)),
(Decibels(..), Linear(l)) => self + Decibels(linear_to_decibels(l)),
}
}
}
impl core::ops::AddAssign<Self> for Volume {
fn add_assign(&mut self, rhs: Self) {
*self = *self + rhs;
}
}
impl core::ops::Sub<Self> for Volume {
type Output = Self;
fn sub(self, rhs: Self) -> Self {
use Volume::{Decibels, Linear};
match (self, rhs) {
(Linear(a), Linear(b)) => Linear(a - b),
(Decibels(a), Decibels(b)) => Decibels(linear_to_decibels(
decibels_to_linear(a) - decibels_to_linear(b),
)),
// {Linear, Decibels} favors the left hand side of the operation by
// first converting the right hand side to the same type as the left
// hand side and then performing the operation.
(Linear(..), Decibels(db)) => self - Linear(decibels_to_linear(db)),
(Decibels(..), Linear(l)) => self - Decibels(linear_to_decibels(l)),
}
}
}
impl core::ops::SubAssign<Self> for Volume {
fn sub_assign(&mut self, rhs: Self) {
*self = *self - rhs;
}
}
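// Multiplication and division compose gains: linear factors multiply (or
// divide), which is why the decibel representation simply adds (or subtracts).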
impl core::ops::Mul<Self> for Volume {
type Output = Self;
fn mul(self, rhs: Self) -> Self {
use Volume::{Decibels, Linear};
match (self, rhs) {
(Linear(a), Linear(b)) => Linear(a * b),
(Decibels(a), Decibels(b)) => Decibels(a + b),
// {Linear, Decibels} favors the left hand side of the operation by
// first converting the right hand side to the same type as the left
// hand side and then performing the operation.
(Linear(..), Decibels(db)) => self * Linear(decibels_to_linear(db)),
(Decibels(..), Linear(l)) => self * Decibels(linear_to_decibels(l)),
}
}
}
impl core::ops::MulAssign<Self> for Volume {
fn mul_assign(&mut self, rhs: Self) {
*self = *self * rhs;
}
}
impl core::ops::Div<Self> for Volume {
type Output = Self;
fn div(self, rhs: Self) -> Self {
use Volume::{Decibels, Linear};
match (self, rhs) {
(Linear(a), Linear(b)) => Linear(a / b),
(Decibels(a), Decibels(b)) => Decibels(a - b),
// {Linear, Decibels} favors the left hand side of the operation by
// first converting the right hand side to the same type as the left
// hand side and then performing the operation.
(Linear(..), Decibels(db)) => self / Linear(decibels_to_linear(db)),
(Decibels(..), Linear(l)) => self / Decibels(linear_to_decibels(l)),
}
}
}
impl core::ops::DivAssign<Self> for Volume {
fn div_assign(&mut self, rhs: Self) {
*self = *self / rhs;
}
}
#[cfg(test)]
mod tests {
use super::Volume::{self, Decibels, Linear};
/// Based on [Wikipedia's Decibel article].
///
/// [Wikipedia's Decibel article]: https://web.archive.org/web/20230810185300/https://en.wikipedia.org/wiki/Decibel
const DECIBELS_LINEAR_TABLE: [(f32, f32); 27] = [
(100., 100000.),
(90., 31623.),
(80., 10000.),
(70., 3162.),
(60., 1000.),
(50., 316.2),
(40., 100.),
(30., 31.62),
(20., 10.),
(10., 3.162),
(5.998, 1.995),
(3.003, 1.413),
(1.002, 1.122),
(0., 1.),
(-1.002, 0.891),
(-3.003, 0.708),
(-5.998, 0.501),
(-10., 0.3162),
(-20., 0.1),
(-30., 0.03162),
(-40., 0.01),
(-50., 0.003162),
(-60., 0.001),
(-70., 0.0003162),
(-80., 0.0001),
(-90., 0.00003162),
(-100., 0.00001),
];
#[test]
fn volume_conversion() {
for (db, linear) in DECIBELS_LINEAR_TABLE {
for volume in [Linear(linear), Decibels(db), Linear(-linear)] {
let db_test = volume.to_decibels();
let linear_test = volume.to_linear();
let db_delta = db_test - db;
let linear_relative_delta = (linear_test - linear) / linear;
assert!(
db_delta.abs() < 1e-2,
"Expected ~{}dB, got {}dB (delta {})",
db,
db_test,
db_delta
);
assert!(
linear_relative_delta.abs() < 1e-3,
"Expected ~{}, got {} (relative delta {})",
linear,
linear_test,
linear_relative_delta
);
}
}
}
#[test]
fn volume_conversion_special() {
assert!(
Decibels(f32::INFINITY).to_linear().is_infinite(),
"Infinite decibels is equivalent to infinite linear scale"
);
assert!(
Linear(f32::INFINITY).to_decibels().is_infinite(),
"Infinite linear scale is equivalent to infinite decibels"
);
assert!(
Linear(f32::NEG_INFINITY).to_decibels().is_infinite(),
"Negative infinite linear scale is equivalent to infinite decibels"
);
assert!(
Decibels(f32::NEG_INFINITY).to_linear().abs() == 0.0,
"Negative infinity decibels is equivalent to zero linear scale"
);
assert!(
Linear(0.0).to_decibels().is_infinite(),
"Zero linear scale is equivalent to negative infinity decibels"
);
assert!(
Linear(-0.0).to_decibels().is_infinite(),
"Negative zero linear scale is equivalent to negative infinity decibels"
);
assert!(
Decibels(f32::NAN).to_linear().is_nan(),
"NaN decibels is equivalent to NaN linear scale"
);
assert!(
Linear(f32::NAN).to_decibels().is_nan(),
"NaN linear scale is equivalent to NaN decibels"
);
}
fn assert_approx_eq(a: Volume, b: Volume) {
const EPSILON: f32 = 0.0001;
match (a, b) {
(Decibels(a), Decibels(b)) | (Linear(a), Linear(b)) => assert!(
(a - b).abs() < EPSILON,
"Expected {:?} to be approximately equal to {:?}",
a,
b
),
(a, b) => assert!(
(a.to_decibels() - b.to_decibels()).abs() < EPSILON,
"Expected {:?} to be approximately equal to {:?}",
a,
b
),
}
}
#[test]
fn volume_ops_add() {
// Linear to Linear.
assert_approx_eq(Linear(0.5) + Linear(0.5), Linear(1.0));
assert_approx_eq(Linear(0.5) + Linear(0.1), Linear(0.6));
assert_approx_eq(Linear(0.5) + Linear(-0.5), Linear(0.0));
// Decibels to Decibels.
assert_approx_eq(Decibels(0.0) + Decibels(0.0), Decibels(6.0206003));
assert_approx_eq(Decibels(6.0) + Decibels(6.0), Decibels(12.020599));
assert_approx_eq(Decibels(-6.0) + Decibels(-6.0), Decibels(0.020599423));
// {Linear, Decibels} favors the left hand side of the operation.
assert_approx_eq(Linear(0.5) + Decibels(0.0), Linear(1.5));
assert_approx_eq(Decibels(0.0) + Linear(0.5), Decibels(3.521825));
}
#[test]
fn volume_ops_add_assign() {
// Linear to Linear.
let mut volume = Linear(0.5);
volume += Linear(0.5);
assert_approx_eq(volume, Linear(1.0));
}
#[test]
fn volume_ops_sub() {
// Linear to Linear.
assert_approx_eq(Linear(0.5) - Linear(0.5), Linear(0.0));
assert_approx_eq(Linear(0.5) - Linear(0.1), Linear(0.4));
assert_approx_eq(Linear(0.5) - Linear(-0.5), Linear(1.0));
// Decibels to Decibels.
assert_eq!(Decibels(0.0) - Decibels(0.0), Decibels(f32::NEG_INFINITY));
assert_approx_eq(Decibels(6.0) - Decibels(4.0), Decibels(-7.736506));
assert_eq!(Decibels(-6.0) - Decibels(-6.0), Decibels(f32::NEG_INFINITY));
}
#[test]
fn volume_ops_sub_assign() {
// Linear to Linear.
let mut volume = Linear(0.5);
volume -= Linear(0.5);
assert_approx_eq(volume, Linear(0.0));
}
#[test]
fn volume_ops_mul() {
// Linear to Linear.
assert_approx_eq(Linear(0.5) * Linear(0.5), Linear(0.25));
assert_approx_eq(Linear(0.5) * Linear(0.1), Linear(0.05));
assert_approx_eq(Linear(0.5) * Linear(-0.5), Linear(-0.25));
// Decibels to Decibels.
assert_approx_eq(Decibels(0.0) * Decibels(0.0), Decibels(0.0));
assert_approx_eq(Decibels(6.0) * Decibels(6.0), Decibels(12.0));
assert_approx_eq(Decibels(-6.0) * Decibels(-6.0), Decibels(-12.0));
// {Linear, Decibels} favors the left hand side of the operation.
assert_approx_eq(Linear(0.5) * Decibels(0.0), Linear(0.5));
assert_approx_eq(Decibels(0.0) * Linear(0.501), Decibels(-6.003246));
}
#[test]
fn volume_ops_mul_assign() {
// Linear to Linear.
let mut volume = Linear(0.5);
volume *= Linear(0.5);
assert_approx_eq(volume, Linear(0.25));
// Decibels to Decibels.
let mut volume = Decibels(6.0);
volume *= Decibels(6.0);
assert_approx_eq(volume, Decibels(12.0));
// {Linear, Decibels} favors the left hand side of the operation.
let mut volume = Linear(0.5);
volume *= Decibels(0.0);
assert_approx_eq(volume, Linear(0.5));
let mut volume = Decibels(0.0);
volume *= Linear(0.501);
assert_approx_eq(volume, Decibels(-6.003246));
}
#[test]
fn volume_ops_div() {
// Linear to Linear.
assert_approx_eq(Linear(0.5) / Linear(0.5), Linear(1.0));
assert_approx_eq(Linear(0.5) / Linear(0.1), Linear(5.0));
assert_approx_eq(Linear(0.5) / Linear(-0.5), Linear(-1.0));
// Decibels to Decibels.
assert_approx_eq(Decibels(0.0) / Decibels(0.0), Decibels(0.0));
assert_approx_eq(Decibels(6.0) / Decibels(6.0), Decibels(0.0));
assert_approx_eq(Decibels(-6.0) / Decibels(-6.0), Decibels(0.0));
// {Linear, Decibels} favors the left hand side of the operation.
assert_approx_eq(Linear(0.5) / Decibels(0.0), Linear(0.5));
assert_approx_eq(Decibels(0.0) / Linear(0.501), Decibels(6.003246));
}
#[test]
fn volume_ops_div_assign() {
// Linear to Linear.
let mut volume = Linear(0.5);
volume /= Linear(0.5);
assert_approx_eq(volume, Linear(1.0));
// Decibels to Decibels.
let mut volume = Decibels(6.0);
volume /= Decibels(6.0);
assert_approx_eq(volume, Decibels(0.0));
// {Linear, Decibels} favors the left hand side of the operation.
let mut volume = Linear(0.5);
volume /= Decibels(0.0);
assert_approx_eq(volume, Linear(0.5));
let mut volume = Decibels(0.0);
volume /= Linear(0.501);
assert_approx_eq(volume, Decibels(6.003246));
}
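// An illustrative extra check (added here as a sketch) covering the `SILENT`
// constant and its conversions.
#[test]
fn volume_silent() {
assert_eq!(Volume::SILENT, Linear(0.0));
assert_eq!(Volume::SILENT.to_linear(), 0.0);
assert_eq!(Volume::SILENT.to_decibels(), f32::NEG_INFINITY);
}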
}