The Signal class inherits from numpy.ndarray via the
audiotoolbox.BaseSignal class:
[Inheritance diagram: numpy.ndarray → audiotoolbox.oaudio.base_signal.BaseSignal → audiotoolbox.oaudio.signal.Signal. Signal additionally inherits from the mixin classes AnalysisMixin, FilteringMixin, GenerationMixin, IOMixin, and ModificationMixin in audiotoolbox.oaudio.signal_mixins, which provide the signal analysis, filtering, generation, io, and modification methods.]
As a consequence, numpy.ndarray methods such as x.min(),
x.max(), x.sum(), x.var() and others can also be used on
audiotoolbox.Signal objects. For more information, check the numpy docs.
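A minimal sketch of this behavior; the constructor signature Signal(n_channels, duration, fs) and the zero-initialization of a new Signal are assumptions, not stated in the text above:

>>> import audiotoolbox as audio
>>> sig = audio.Signal(2, 1.0, 48000)  # assumed: 2 channels, 1 s at 48 kHz, initialized to zeros
>>> sig += 1.0                         # plain ndarray arithmetic works on Signal objects
>>> float(sig.max()) == float(sig.min()) == 1.0
True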
This function multiplies a fade window with a given rise time
onto the signal. For more information about the individual
window functions, refer to the implementations:
cos: A raised cosine window, audiotoolbox.cosine_fade_window()
gauss: A Gaussian window, audiotoolbox.gaussian_fade_window()
Parameters:
rise_time (float) – The rise time in seconds.
type ({'cos', 'gauss', 'cos2'}) – The type of the window (default: 'cos').
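A usage sketch; the method name add_fade_window and its returning the modified signal are assumptions, only the parameters follow the list above:

>>> import audiotoolbox as audio
>>> sig = audio.Signal(1, 1.0, 48000)                      # assumed constructor signature
>>> sig = sig.add_fade_window(rise_time=0.01, type='cos')  # assumed method name; 10 ms raised-cosine ramps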
add Gaussian noise with a defined variance and different
spectral shapes. The noise is generated in the frequency domain
using the Gaussian pseudorandom generator numpy.random.randn.
The real and imaginary part of each frequency component are set
using the pseudorandom generator. Each frequency bin is then
weighted depending on the spectral shape. The resulting spectrum
is then transformed into the time domain using numpy.fft.ifft.
Weighting functions:
white: \(w(f) = 1\)
pink: \(w(f) = \frac{1}{\sqrt{f}}\)
brown: \(w(f) = \frac{1}{f}\)
Parameters:
ntype ({'white', 'pink', 'brown'}) – spectral shape of the noise
variance (scalar, optional) – The variance of the noise.
seed (int or 1-d array_like, optional) – Seed for RandomState.
Must be convertible to 32 bit unsigned integers.
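A usage sketch; the method name add_noise and its returning the modified signal are assumptions, the parameters follow the list above:

>>> import audiotoolbox as audio
>>> sig = audio.Signal(1, 1.0, 48000)                       # assumed constructor signature
>>> sig = sig.add_noise(ntype='pink', variance=1, seed=42)  # assumed method name; 1/sqrt(f) weighting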
This function adds partly uncorrelated noise using the N+1
generator method.
To generate N partly uncorrelated noises with a desired
correlation coefficient of \(\rho\), the algorithm first generates N+1
noise tokens which are then orthogonalized using the Gram-Schmidt
process (as implemented in numpy.linalg.qr). The (N+1)-th noise token
is then mixed with each of the remaining noise tokens.
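The exact mixing equation is not reproduced above; in the usual formulation of the N+1 method (stated here as an assumption), each orthogonalized token \(x_i\) is mixed with the shared token \(x_{N+1}\) as
\(y_i = \sqrt{\rho}\, x_{N+1} + \sqrt{1 - \rho}\, x_i\),
which preserves the variance of the tokens and gives each pair \(y_i\), \(y_j\) the desired correlation coefficient \(\rho\).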
Convolves the current signal with the given kernel.
This method performs a convolution operation between the current signal
and the provided kernel. The convolution is performed along the
overlapping dimensions of the two signals. For example, if the signal has two channels
and the kernel has two channels, the first channel of the signal is convolved
with the first channel of the kernel, and the second channel of the signal is
convolved with the second channel of the kernel. The resulting signal will again have
two channels. If overlap_dimensions is False, the convolution is performed
along all dimensions. A Signal with two channels convolved with a two-channel kernel
will result in an output of shape (2, 2) where each channel of the signal is convolved with
each channel of the kernel.
This method uses scipy.signal.fftconvolve for the convolution.
Parameters:
mode (str {'full', 'valid', 'same'}, optional) – The convolution mode for fftconvolve (default: 'full').
overlap_dimensions (bool, optional) – Whether to convolve only along overlapping dimensions. If True, the
convolution is performed only along the dimensions that overlap between
the two signals. If False, the convolution is performed along all
dimensions. Defaults to True.
Returns:
The convolved signal.
Return type:
Self
Examples
If the last dimension of signal and the first dimension of kernel match,
convolution takes place along this axis. This means that the first
channel of the signal is convolved with the first channel of the kernel,
the second with the second.
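The doctest that belongs here is not reproduced above; the following sketch of the described behavior assumes the constructor signature Signal(n_channels, duration, fs) and an n_channels attribute:

>>> import audiotoolbox as audio
>>> sig = audio.Signal(2, 1.0, 48000)        # two-channel signal (assumed constructor)
>>> kernel = audio.Signal(2, 0.05, 48000)    # two-channel kernel
>>> out = sig.convolve(kernel, mode='full')  # channel 1 * channel 1, channel 2 * channel 2
>>> out.n_channels                           # assumed attribute; the result keeps two channels
2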
Circularly shift the signal forward to create a certain time
delay relative to the original time. For example, if shifted by an
equivalent of N samples, the value at sample i will move to
sample i + N.
Two methods can be used. With the default method 'fft', the
signal is shifted by applying an FFT, phase
shifting each frequency component according to the delay, and applying the
inverse transform. This is identical to using the
:meth:`audiotoolbox.FrequencyDomainSignal.time_shift`
method. With the method 'sample', the signal is time
delayed by circularly shifting it by the number of
samples that is closest to the delay.
Parameters:
delay (float) – The delay in seconds.
method ({'fft', 'sample'}, optional) – The method used to delay the signal (default: 'fft').
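A usage sketch; the method name delay is an assumption (the text only states the parameters):

>>> import audiotoolbox as audio
>>> sig = audio.Signal(1, 1.0, 48000)         # assumed constructor signature
>>> delayed = sig.delay(0.005, method='fft')  # assumed method name; 5 ms circular time shift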
This method loads a signal from an audio file and assigns it to the current
Signal object. The signal can be loaded from a specific start point and for
specific channels.
Parameters:
filename (str) – The path to the audio file to load.
start (int, optional) – The starting sample index from which to load the signal. Default is 0.
channels (int, tuple, or str, optional) – The channels to load from the audio file. Can be an integer specifying
a single channel, a tuple specifying multiple channels, or “all” to load
all channels. Default is “all”.
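A usage sketch; the method name from_file, its returning the loaded signal, and the example file name are assumptions:

>>> import audiotoolbox as audio
>>> sig = audio.Signal(1, 1.0, 48000)                   # assumed constructor; content is replaced on load
>>> sig = sig.from_file('speech.wav', channels='all')   # assumed method name and example file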
Shifts all frequency components of a signal by a constant
phase. This is identical to calling the phase_shift method of
the FrequencyDomainSignal class.
Parameters:
phase (scalar) – The phase in rad by which the signal is shifted.
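A usage sketch; the constructor signature is an assumption, the method name phase_shift follows from the text:

>>> import numpy as np
>>> import audiotoolbox as audio
>>> sig = audio.Signal(1, 1.0, 48000)     # assumed constructor signature
>>> shifted = sig.phase_shift(np.pi / 2)  # shift every frequency component by 90 degrees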
Quick playback of the signal over the default audio output device.
Parameters:
block (bool, optional) – If True, the method will block until playback is finished. If False,
playback will be non-blocking and the method will return immediately.
Default is True.
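A usage sketch; the method name play is an assumption:

>>> import audiotoolbox as audio
>>> sig = audio.Signal(1, 1.0, 48000)  # assumed constructor signature
>>> sig.play(block=True)               # assumed method name; blocks until playback finishes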
This method uses the resampy library to resample the signal to a new
sampling rate. It is based on the band-limited sinc interpolation method
for sampling rate conversion as described by Smith (2015).
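A usage sketch; the method name resample and its signature are assumptions:

>>> import audiotoolbox as audio
>>> sig = audio.Signal(1, 1.0, 48000)  # assumed constructor signature
>>> resampled = sig.resample(44100)    # assumed method name; target sampling rate in Hz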
This method saves the current signal as an audio file. Additional parameters
for the file format can be specified through keyword arguments. The file can
be saved in any format supported by libsndfile, such as WAV, FLAC, AIFF, etc.
Parameters:
filename (str) – The filename to save the audio file as.
**kwargs – Additional keyword arguments to be passed to the audiotoolbox.wav.writefile
function. These can include format and subtype.
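A usage sketch; the method name save is an assumption, the keyword arguments follow the description above:

>>> import audiotoolbox as audio
>>> sig = audio.Signal(1, 1.0, 48000)    # assumed constructor signature
>>> sig.save('out.flac', format='FLAC')  # assumed method name; kwargs go to audiotoolbox.wav.writefile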
This function adds zeros of a given number or duration to the start or
end of a signal.
If number or duration is a scalar, an equal number of zeros
will be appended at the front and end of the array. If a
vector of two values is given, the first defines the number or
duration at the beginning, the second the number or duration
of zeros at the end.
Parameters:
number (scalar or vector of len(2), optional) – Number of zeros.
duration (scalar or vector of len(2), optional) – Duration of zeros in seconds.
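A usage sketch; the method name zeropad and its returning the padded signal are assumptions, the parameters follow the list above:

>>> import audiotoolbox as audio
>>> sig = audio.Signal(1, 1.0, 48000)          # assumed constructor signature
>>> padded = sig.zeropad(duration=[0.1, 0.2])  # assumed method name; 100 ms front, 200 ms back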
The Signal.time_frequency submodule gives access to an instance of the audiotoolbox.oaudio.time_frequency.TimeFrequency class that provides time-frequency analysis methods such as spectrograms.