pyroomacoustics is a powerful Python module for acoustic signal processing. It supports room acoustics simulation, beamforming, sound source direction-of-arrival estimation, and sound source separation.
pyroomacoustics makes it easy to simulate room acoustics. The simulation uses the image source method, and a hybrid simulator that combines the image source method with ray tracing is also available.
You can run a room acoustics simulation in four easy steps.
module.py
import numpy as np
import pyroomacoustics as pra
import matplotlib.pyplot as plt
from IPython.display import display, Audio
The `Room` object lets you set up a 2D or 3D room. Use the `ShoeBox` class to create a typical rectangular (cuboid) room. `max_order` is the upper limit on the number of reflections used by the image source method.
room.py
# Reverberation time and room dimensions
rt60 = 0.5  # seconds
room_dim = [9, 7.5, 3.5]  # meters; a 2D list would instead give a room with a flat, rectangular floor plan
# From Sabine's reverberation formula, derive the average absorption coefficient
# of the walls and the reflection-order limit for the image source method.
e_absorption, max_order = pra.inverse_sabine(rt60, room_dim)
# Create the room.
# fs is the sampling frequency of the generated impulse response;
# if you have a source signal to play back, match its sampling frequency.
room = pra.ShoeBox(
    room_dim, fs=16000, materials=pra.Material(e_absorption), max_order=max_order
)
You can view the room with `room.plot()`.
plot.py
fig, ax = room.plot()
Even if you don't know the reverberation time or the absorption coefficient of the walls, you can define a room by choosing a material from the `Material` database.
material.py
m = pra.Material(energy_absorption="hard_surface")
room = pra.ShoeBox([9, 7.5, 3.5], fs=16000, materials=m, max_order=17)
You can also go into more detail and set a different material for each wall, the floor, and the ceiling.
material.py
m = pra.make_materials(
ceiling="hard_surface",
floor="6mm_carpet",
east="brickwork",
west="brickwork",
north="brickwork",
south="brickwork",
)
room = pra.ShoeBox(
[9, 7.5, 3.5], fs=16000, materials=m, max_order=17
)
You can also use `Room.from_corners(corners)` to set the room shape from coordinates. First, create the floor plan; the arguments are the same as for `ShoeBox`.
corner.py
# Set the corner coordinates
corners = np.array([[0, 0], [0, 3], [5, 3], [5, 1], [3, 1], [3, 0]]).T  # [x, y]
# Create the room
room = pra.Room.from_corners(corners, fs=16000, materials=pra.Material(e_absorption), max_order=1)
Then add height information with `extrude` to raise the walls.
extrude.py
room.extrude(2.)
When using ray tracing, `max_order=3` is recommended for the image source part of the simulation.
raytracing.py
room = pra.ShoeBox(
    room_dim, fs=16000, materials=pra.Material(e_absorption), max_order=3, ray_tracing=True
)
# Activate ray tracing
room.set_ray_tracing()
In a huge space such as a concert hall, the air itself absorbs sound, so the high frequencies in particular do not carry well. You can model this with `air_absorption=True`.
air_absorption.py
room = pra.ShoeBox(
    room_dim, fs=16000, materials=pra.Material(e_absorption), max_order=3, air_absorption=True
)
room.py
pyroomacoustics.room.Room(walls, fs=8000, t0=0.0, max_order=1, sigma2_awgn=None, sources=None, mics=None, temperature=None, humidity=None, air_absorption=False, ray_tracing=False)
You can also set various options such as `temperature` (Celsius), `humidity` (relative humidity), and `t0` (simulation start time).
Next, add a microphone array to the `room` we created. A microphone array is a sound-pickup system made up of multiple microphones.
mic.py
# Give the microphone coordinates, one column per microphone
mic_locs = np.c_[
    [6.3, 4.87, 1.2],  # mic 1
    [6.3, 4.93, 1.2],  # mic 2
]
# Add the microphone array to the room
room.add_microphone_array(mic_locs)
If you want to arrange the microphones in a circle or along a straight line, there are functions that compute various microphone layouts so that you don't have to work out the coordinates yourself. (Note that they only return 2D coordinates, so to use them in 3D you have to append the height coordinate and raise the array yourself.)
Given the center of the array in the room, the number of microphones, the counterclockwise rotation from the x-axis, and the array radius, the function returns the (x, y) coordinates of each microphone.
circular_2D_array.py
mic_locs = pra.circular_2D_array(center=[2.,2.], M=6, phi0=0, radius=0.1)
>>> array([[2.1, 2.05, 1.95, 1.9, 1.95, 2.05],
           [2., 2.08660254, 2.08660254, 2., 1.91339746, 1.91339746]])
Besides the circular arrangement, there are `linear_2D_array()`, `square_2D_array()`, `poisson_2D_array()`, and `spiral_2D_array()`; see the documentation for details.
mic.py
# You can also add a single microphone
mic_loc = [1.0, 2.0, 2.0]
room.add_microphone(mic_loc)
To check whether the coordinates where you plan to place a microphone or sound source are actually inside the room, do the following. `include_borders` controls whether points lying on a wall count as inside. The method returns `True` if the point is inside the room, `False` otherwise.
check_inside.py
p = [1., 2.5, 12.2]
room.is_inside(p, include_borders=True)
Sound sources are added like microphones: give the coordinates, plus the signal you want the source to emit. The signal can be one you generate yourself or one read from a WAV file.
source.py
# Read the wav files to place as sources
from scipy.io import wavfile
_, audio1 = wavfile.read('speech1.wav')
_, audio2 = wavfile.read('speech2.wav')
_, audio3 = wavfile.read('speech3.wav')
# Add each source to `room` with its coordinates.
# You can optionally add a delay (in seconds).
room.add_source([2.5, 3.73, 1.76], signal=audio1, delay=1.3)
room.add_source([1.0, 2.34, 2.12], signal=audio2)
room.add_source([3.2, 1.7, 2.2], signal=audio3, delay=2.)
If you call `room.plot()` again after adding the microphones and sound sources, they are shown in the figure.
Once the microphones and sound sources are in place, it's time to run the simulation. Execution is just one line.
simulate.py
room.simulate()
If you want to take the signal-to-noise ratio into account during the simulation, do the following.
simulation.py
# signal-to-noise ratio (dB)
SNR = 90.
room.simulate(snr=SNR)
The sound arriving at each of the placed microphones can be extracted as follows. The sampling frequency is available as `room.fs`.
result.py
simulation_data = room.mic_array.signals  # simulated microphone signals
display(Audio(simulation_data[0,:], rate=room.fs))
You can inspect the impulse response from every sound source to every microphone. Note that `compute_rir()` stores the result in `room.rir` rather than returning it; `room.rir[i][j]` is the response from source `j` to microphone `i`.
rir.py
room.compute_rir()
impulse_responses = room.rir
display(Audio(impulse_responses[0][0], rate=room.fs))
You can measure the reverberation time (RT60) from the simulated impulse responses with `measure_rt60()`.
rt60.py
rt60 = room.measure_rt60()
print("Reverberation time: {}".format(rt60))
The measurement is also available as a standalone function, so it works not only on the simulated impulse responses but also on your own.
rt60.py
rt60 = pra.experimental.measure_rt60(impulse_responses[0][0], fs=room.fs)
print("Reverberation time: {}".format(rt60))
You can also compute the image sources explicitly and visualize them together with the room.
model.py
# compute image sources
room.image_source_model()
# visualize 3D polyhedron room and image sources
fig, ax = room.plot(img_order=3)
That covers the basics. There are countless other options, so check them out: Docs » Room Simulation.
Official Documentation · Official GitHub