The principle of BPR is to determine the subspace of a certain class of samples based on that class of samples itself. If we can find a set of multi-weights neurons (a multi-weights neuron network) covering all the training samples, the subspace of the network represents the sample subspace. When an unknown sample falls inside this subspace, it is judged to belong to the same class as the training samples. Moreover, if a new class of samples is added, it is not necessary to retrain any of the previously trained classes: the training of one class of samples is independent of all the others.

III. System Description

The speech recognition system is divided into two main blocks. The first is the signal preprocessing and speech feature extraction block. The second is the multi-weights neuron network, which performs the task of BPR.

1. Speech feature extraction

Mel-based Cepstral Coefficients (MFCC) are used as speech features. They are calculated as follows: A/D conversion; endpoint detection using short-time energy and zero-crossing rate (ZCR); pre-emphasis and Hamming windowing; fast Fourier transform; DCT transform. The number of features extracted for each frame is 16, and 32 frames are chosen for every utterance, so a 512-dimensional Mel-Cepstral feature vector ($16 \times 32$ numerical values) represents the pronunciation of every word. (A code sketch of this pipeline appears just before Section V.)

2. Multi-weights neuron network architecture

As a new general-purpose theoretical model of pattern recognition, BPR is realized here by multi-weights neuron networks. To train a certain class of samples, a multi-weights neuron subnetwork is established. The subnetwork consists of one input layer, one multi-weights neuron hidden layer, and one output layer. Such a subnetwork can be considered as a mapping $F: R^{512} \to R$,

$F(X) = \min(Y_1, Y_2, \ldots, Y_m)$,

where $Y_i$ is the output of the $i$-th multi-weights neuron, $i = 1, 2, \ldots, m$, there are $m$ hidden multi-weights neurons, and $X \in R^{512}$ is the input vector.

IV. Training of MWN Networks

1. Basics of MWN network training

Training one multi-weights neuron subnetwork requires calculating the weights of the multi-weights neuron layer. The multi-weights neuron and the training algorithm used are those of Ref. [4]. In this algorithm, if the number of training samples of each class is $N$, we can use $N - 2$ neurons; in this paper, $N = 30$. Each neuron computes

$Y_i = f[(s_{i1}, s_{i2}, s_{i3}), x]$,

a function with multiple vector inputs and one scalar output.

2. Optimization method

If there are many training samples, the number of neurons becomes very large, which reduces the recognition speed. When learning several classes of samples, knowledge of the class membership of the training samples is available, and we use this information in a supervised training algorithm to reduce the scale of the network. When training class A, we treat the training samples of the other 14 classes as class B. There are thus 30 training samples in the set $A: \{a_1, a_2, \ldots, a_{30}\}$ and 420 training samples in the set $B: \{b_1, b_2, \ldots, b_{420}\}$. First we select three samples $a_{k1}, a_{k2}, a_{k3}$ from $A$, giving the neuron

$Y_1 = f[(a_{k1}, a_{k2}, a_{k3}), x]$.

Let $A^{(0)} = A$ and compute $Y_{1,Ai} = f[(a_{k1}, a_{k2}, a_{k3}), a_i]$ for $i = 1, 2, \ldots, 30$, and $Y_{1,Bj} = f[(a_{k1}, a_{k2}, a_{k3}), b_j]$ for $j = 1, 2, \ldots, 420$, and let $V_1 = \min_j(Y_{1,Bj})$. We specify a value $r$, $0 < r < 1$. If $Y_{1,Ai} < r \cdot V_1$, sample $a_i$ is removed from the set, which yields a new set $A^{(1)}$. We continue in this way until the set $A^{(k)}$ is empty, $A^{(k)} = \emptyset$; the training then ends, and the subnetwork of class A has a hidden layer of $k$ neurons. (A sketch of this procedure is given below.)
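To make the procedure concrete, the following is a minimal sketch of the supervised training loop above together with the decision function of Section III.2. The neuron function f here is a placeholder assumption: the exact multi-weights neuron of Ref. [4] is not reproduced in this text, so f is approximated as the Euclidean distance from the input to the triangle spanned by the three weight vectors, and the seed-selection rule and the value of r are illustrative choices.

```python
# Sketch of the Section IV.2 training loop and the Section III.2 decision
# function. f below is an ASSUMED stand-in for the Ref. [4] neuron.
import numpy as np

def f(weights, x, grid=10):
    # Approximate distance from x to the triangle with vertices s1, s2, s3,
    # by sampling the triangle on a barycentric grid.
    s1, s2, s3 = weights
    best = np.inf
    for i in range(grid + 1):
        for j in range(grid + 1 - i):
            u, v = i / grid, j / grid
            p = (1 - u - v) * s1 + u * s2 + v * s3
            best = min(best, np.linalg.norm(x - p))
    return best

def train_class(A, B, r=0.5):
    # Grow neurons until every sample of class A is covered (A^(k) empty).
    remaining = list(A)
    neurons = []
    while remaining:
        # Select three samples from the remaining set (the text does not fix
        # the rule; take the first three, repeating the last if fewer remain).
        seeds = tuple((remaining + remaining[-1:] * 2)[:3])
        neurons.append(seeds)
        V = min(f(seeds, b) for b in B)  # V: closest class-B sample
        # Remove every a_i with Y_Ai < r * V; the seeds have distance 0 and
        # are always dropped, so the loop terminates.
        remaining = [a for a in remaining
                     if f(seeds, a) >= r * V and all(a is not s for s in seeds)]
    return neurons

def F(neurons, x):
    # Subnetwork output of Section III.2: minimum over the hidden neurons.
    return min(f(w, x) for w in neurons)

def classify(subnets, x):
    # BPR decision: the class whose subnetwork covers x most closely.
    # (A rejection threshold for "unknown" samples is omitted here.)
    return min(subnets, key=lambda name: F(subnets[name], x))
```

Because each class is trained only against its own samples (with the other classes serving merely as the background set B), a new class can be added by calling train_class once more, without retraining the existing subnetworks, which mirrors the BPR property stated at the start of this section.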
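For completeness, the feature extraction pipeline of Section III.1 can be sketched as follows. The sampling rate, filter count, and frame layout are assumptions for illustration (the paper does not specify them), and endpoint detection by short-time energy and ZCR is omitted: the function maps one already-trimmed utterance to the 512-dimensional ($16 \times 32$) feature vector described above.

```python
# Sketch of the Section III.1 pipeline: pre-emphasis, Hamming windowing,
# FFT, mel filterbank, DCT. Parameter values are illustrative assumptions.
import numpy as np
from scipy.fft import dct

def mfcc_512(signal, fs=8000, n_frames=32, n_coeffs=16, n_filters=26):
    signal = np.asarray(signal, dtype=float)
    # Pre-emphasis boosts high frequencies before windowing.
    emph = np.append(signal[0], signal[1:] - 0.97 * signal[:-1])
    # Exactly 32 half-overlapping, Hamming-windowed frames per utterance.
    frame_len = 2 * len(emph) // (n_frames + 1)
    hop = frame_len // 2
    frames = np.stack([emph[i * hop:i * hop + frame_len] for i in range(n_frames)])
    frames = frames * np.hamming(frame_len)
    # Power spectrum via FFT.
    nfft = 512
    power = np.abs(np.fft.rfft(frames, nfft)) ** 2
    # Triangular mel-scale filterbank.
    mel = lambda f: 2595.0 * np.log10(1.0 + f / 700.0)
    imel = lambda m: 700.0 * (10.0 ** (m / 2595.0) - 1.0)
    edges = imel(np.linspace(mel(0.0), mel(fs / 2.0), n_filters + 2))
    bins = np.floor((nfft + 1) * edges / fs).astype(int)
    fbank = np.zeros((n_filters, nfft // 2 + 1))
    for i in range(n_filters):
        lo, c, hi = bins[i], bins[i + 1], bins[i + 2]
        fbank[i, lo:c] = (np.arange(lo, c) - lo) / max(c - lo, 1)
        fbank[i, c:hi] = (hi - np.arange(c, hi)) / max(hi - c, 1)
    # Log filterbank energies, then DCT; keep 16 coefficients per frame.
    logfb = np.log(power @ fbank.T + 1e-10)
    coeffs = dct(logfb, type=2, axis=1, norm='ortho')[:, :n_coeffs]
    return coeffs.flatten()  # 16 coefficients x 32 frames = 512 values
```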
V. Experiment Results

A speech database consisting of the names of 15 Chinese dishes was developed for the