% Generate multichannel diffuse babble noise and scale it so the ratio of clean
% signal power to diffuse noise power equals setup.sdnr (in dB).
mcSignals.diffNoise = generateMultichanBabbleNoise(setup.nSamples,setup.nSensors,setup.sensorDistance,...
    setup.speedOfSound,setup.noiseField);
diffNoisePowerMeas = var(mcSignals.diffNoise);
diffNoisePowerTrue = cleanSignalPowerMeas/10^(setup.sdnr/10);
mcSignals.diffNoise = mcSignals.diffNoise*...
    diag(sqrt(diffNoisePowerTrue)./sqrt(diffNoisePowerMeas));

% Generate white sensor noise and scale it so the ratio of clean signal power to
% sensor noise power equals setup.ssnr (in dB).
mcSignals.sensNoise = randn(setup.nSamples,setup.nSensors);
sensNoisePowerMeas = var(mcSignals.sensNoise);
sensNoisePowerTrue = cleanSignalPowerMeas/10^(setup.ssnr/10);
mcSignals.sensNoise = mcSignals.sensNoise*...
    diag(sqrt(sensNoisePowerTrue)./sqrt(sensNoisePowerMeas));

% Total noise and the observed (noisy) multichannel signal.
mcSignals.noise = mcSignals.diffNoise + mcSignals.sensNoise;
mcSignals.observed = mcSignals.clean + mcSignals.noise;
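As a quick sanity check (not part of the original script), the per-channel noise ratios can be recomputed after the scaling above; since each noise channel is rescaled by sqrt(targetPower/measuredPower), they should come out equal to setup.sdnr and setup.ssnr. This assumes cleanSignalPowerMeas is the 1-by-nSensors vector of per-channel clean signal powers computed earlier in the script.

achievedSdnr = 10*log10(cleanSignalPowerMeas./var(mcSignals.diffNoise));  % should equal setup.sdnr
achievedSsnr = 10*log10(cleanSignalPowerMeas./var(mcSignals.sensNoise));  % should equal setup.ssnr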
%------------------------------processing end-----------------------------------------------------------
%----------------produce the noisy speech at each mic in the specified environment setting----------------
noisy_mix1=10*mcSignals.observed(:,1); % Amplify the signals received by the mics tenfold
noisy_mix2=10*mcSignals.observed(:,2);
noisy_mix3=10*mcSignals.observed(:,3);
noisy_mix4=10*mcSignals.observed(:,4);
l1=size(noisy_mix1); % sizes of the mixtures (not used further in this excerpt)
l2=size(noisy_mix2);
l3=size(noisy_mix3);
l4=size(noisy_mix4);
audiowrite('diffused_babble_noise1_20dB.wav',noisy_mix1,setup.sampFreq);
audiowrite('diffused_babble_noise2_20dB.wav',noisy_mix2,setup.sampFreq);
audiowrite('diffused_babble_noise3_20dB.wav',noisy_mix3,setup.sampFreq);
audiowrite('diffused_babble_noise4_20dB.wav',noisy_mix4,setup.sampFreq);
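Note that audiowrite clips any sample outside [-1, 1] when writing to the default 16-bit WAV format, so the fixed tenfold gain above may distort loud passages. A minimal alternative, if clipping is a concern, is to peak-normalise each channel before writing (the 0.99 headroom factor below is an arbitrary choice, not taken from the original script):

noisy_mix1 = mcSignals.observed(:,1);
noisy_mix1 = 0.99*noisy_mix1/max(abs(noisy_mix1));   % peak-normalise instead of a fixed x10 gain
audiowrite('diffused_babble_noise1_20dB.wav',noisy_mix1,setup.sampFreq);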
/ y" L. w5 Y1 b" X5 e4 W
%-----------------------------end-------------------------------------------------------------------------

This is the main script. Running it directly produces the desired audio files, but you first need to supply your own clean speech file and noise file. They correspond to the statement [cleanSignal,setup.sampFreq] = audioread('..\data\twoMaleTwoFemale20Seconds.wav') inside multichannelSignalGenerator(), and to the statement [singleChannelData,samplingFreq] = audioread('babble_8kHz.wav') inside generateMultichanBabbleNoise(). Simply replace these paths with the audio files you want to process.
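For example, if your own recordings were called my_clean_speech.wav and my_babble_noise.wav (hypothetical file names used only for illustration), the two statements would become:

% in multichannelSignalGenerator():
[cleanSignal,setup.sampFreq] = audioread('my_clean_speech.wav');
% in generateMultichanBabbleNoise():
[singleChannelData,samplingFreq] = audioread('my_babble_noise.wav');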
Besides that, a few basic parameters of the simulated experimental environment also need to be set (see the sketch after this list):
- Array geometry: the code can only simulate a uniform linear microphone array, so the geometry itself does not need to be set.
- Microphone type (micType): omnidirectional, cardioid, subcardioid, hypercardioid, or bidirectional; the default is omnidirectional, as shown in Figure 1.
- Number of microphones (nSensors).
- Spacing between adjacent microphones (sensorDistance).
- Center position of the microphone array (arrayCenter), given as (x,y,z) coordinates.
- Height of the microphone array (arrayHeight); this seems to overlap with arrayCenter, and it is not clear why a separate parameter is needed.
- Position of the target source (srcPoint), also given as (x,y,z) coordinates.
- Height of the target source (srcHeight).
- Distance from the microphone array to the target source (arrayToSrcDistInt), measured as the projection onto the xy plane.
- Room dimensions (roomDim); the room's (x,y,z) coordinate system is shown in Figure 2.
- Reverberation time of the room (reverbTime).
- Type of the diffuse noise field (noiseField): spherical or cylindrical.
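As a rough illustration only, a setup of the kind described above might look like the sketch below. The field names are the ones mentioned in the text; the values, and the exact way the original script builds the setup struct, are assumptions, so check them against your own copy of the code.

% Illustrative values only (assumed), using the parameter names described above.
setup.micType           = 'omnidirectional';  % or 'cardioid', 'subcardioid', 'hypercardioid', 'bidirectional'
setup.nSensors          = 4;                  % number of microphones in the uniform linear array
setup.sensorDistance    = 0.05;               % spacing between adjacent microphones [m]
setup.arrayCenter       = [2.5 1.5 1.5];      % (x,y,z) position of the array center [m]
setup.arrayHeight       = 1.5;                % array height [m]
setup.srcPoint          = [2.5 3.0 1.5];      % (x,y,z) position of the target source [m]
setup.srcHeight         = 1.5;                % source height [m]
setup.arrayToSrcDistInt = 1.5;                % array-to-source distance projected onto the xy plane [m]
setup.roomDim           = [5 4 3];            % room dimensions (x,y,z) [m]
setup.reverbTime        = 0.3;                % reverberation time T60 [s]
setup.noiseField        = 'spherical';        % diffuse noise field type: 'spherical' or 'cylindrical'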