aaamachine, inc. | powder processing services - nisshin, matsubo

Cryogenic grinding is a technique that uses liquefied nitrogen cooled to an ultralow temperature (-196°C / -320°F) to freeze and grind various materials, including resins and foods. By using liquefied nitrogen produced by an affiliated company in the same group, we can conduct low-cost and highly efficient grinding.

In order to assist you in testing and/or custom processing your material in the USA, we need detailed information on your material and its specifications.

Once AAAmachine receives the completed EDS, we will recommend the best equipment for the job and prepare a service quotation, including an estimate of the time required to complete the service.

AAAmachine, Inc.'s powder processing service offers timely testing and custom powder processing in the USA, ranging from laboratory tests and performance comparisons to toll manufacturing and measurement analysis. AAAmachine, Inc. is also able to produce toners at large scale by cooperating with partners and equipment manufacturers in other countries.

From the millimeter order down to the submicron level, from foods, metals, rocks, slag, toners, and carbon to powder coatings, and from heat-sensitive to abrasive powders, AAAmachine handles custom classification of every type of material. AAAmachine meets all needs for manufacturing ultra-fine powders, attaining extremely narrow particle size distributions. By utilizing world-class classifiers, it can classify various powders from the millimeter order down to the submicron/nanometer level.

With its cutting-edge, AAA-rated powder processing technology, AAAmachine meets all of its customers' needs. World-class powder measurement technology is also available; precise measurements and an extraordinarily experienced and capable staff are the keys to the company's success in this field.

raymond mill - raymond mill for sale - agico

Raymond mill is a classic powder grinding machine in the power industry and holds a high market share. It is mature in technology, stable in performance, energy-saving, efficient, and durable, and produces finished powder in a single pass. The fineness can be adjusted freely.

Raymond mill is mainly used in the mining, metallurgy, chemical, building materials, petrochemical, coal, smelting, fertilizer, medicine, infrastructure, road, engineering, abrasive, food, feed, plastics, and other industries. It can grind various raw materials with a hardness of up to Mohs grade 7 and a moisture content of about 8%.

The fineness of the finished product can be adjusted freely in the range of about 30-350 mesh. Processable materials include cement clinker, quartz sand, feldspar, calcite, lime, limestone, dolomite, barite, graphite, gold and silver ore, rutile, titanium dioxide, kaolin, bentonite, aluminum ore, coke gemstone, fluorite, wollastonite, phosphate rock, rare earth, iron oxide red, silicon metal, niobium alloy, silicon carbide, emery, smelted metal, calcium-magnesium ore, coal, activated carbon, humic acid, carbon black, plant ash, coal gangue, slag, zircon sand, iron ore, fine iron powder, talc, granite, potash feldspar, marble, wax feldspar, clay, glass, coke, petroleum coke, water slag, fly ash, cement, pigment, ceramsite sand, and saline and muddy sand, as well as milling materials such as additives, fire-retardant agents, curing agents, waste ceramics, waste bricks, refractory materials, and bauxite.

Raymond mill has been widely used in the processing of non-metallic minerals in the more than 20 years since its introduction to China. However, with the extensive development of ultra-fine powder applications for non-metallic minerals, downstream enterprises are placing ever higher demands on non-metallic mineral products, especially regarding product fineness, which traditional Raymond mills cannot meet. The Dalil Heavy Industry Technology Department therefore carried out in-depth analysis and research on the traditional grinding machine, solved many serious defects such as frequent maintenance, high maintenance cost, and insufficient grinding fineness, and developed a new ultra-fine Raymond mill that combines crushing, grinding, and ultra-fine classification in a single set of equipment.

Due to its high efficiency, low energy consumption, ultra-fine output, small floor space, low capital investment, and freedom from environmental pollution, it is widely used for grinding mineral materials in metallurgy, building materials, the chemical industry, mining, and other fields. It is suitable for non-flammable, non-explosive minerals with a Mohs hardness below 7 and a moisture content below 6%, such as gypsum, talc, calcite, limestone, marble, potash feldspar, barite, dolomite, granite, kaolin, bentonite, medical stone, bauxite, iron oxide red, and iron ore.

The main machine rotates the central shaft through a double-belt combined transmission mechanism. The upper end of the shaft is connected to the plum blossom frame, and the grinding rollers assembled on the frame form a dynamic grinding system with a swinging fulcrum. When the plum blossom frame rotates, the grinding rollers press against the peripheral grinding ring, forming a complete crushing combination; each roller revolves around the grinding ring while also rotating about its own axis under friction. A blade system located in front of the grinding rollers throws the material up during operation and feeds it between roller and ring, forming a material layer. This layer is crushed by the extrusion and friction generated by the rotation of the grinding rollers, thereby achieving grinding.

Suitable materials are non-flammable, non-explosive ores with a Mohs hardness of 6 or less and a moisture content below 6%, such as calcium carbonate, graphite, bentonite, wollastonite, coal powder, water slag, fly ash, alumina, zircon sand, quartz sand, gypsum, talc, calcite, limestone, marble, barite, dolomite, granite, kaolin, bauxite, iron oxide red, and iron ore, as well as materials for power plant environmental protection (desulfurization and denitrification).

comparative analysis of classifiers for developing an adaptive computer-assisted eeg analysis system for diagnosing epilepsy

Malik Anas Ahmad, Yasar Ayaz, Mohsin Jamil, Syed Omer Gillani, Muhammad Babar Rasheed, Muhammad Imran, Nadeem Ahmed Khan, Waqas Majeed, Nadeem Javaid, "Comparative Analysis of Classifiers for Developing an Adaptive Computer-Assisted EEG Analysis System for Diagnosing Epilepsy", BioMed Research International, vol. 2015, Article ID 638036, 14 pages, 2015. https://doi.org/10.1155/2015/638036

Computer-assisted analysis of the electroencephalogram (EEG) has tremendous potential to assist clinicians during the diagnosis of epilepsy. These systems are trained to classify the EEG based on the ground truth provided by neurologists. So, there should be a mechanism in these systems by which a system's incorrect markings can be pointed out, so that the system can improve its classification by learning from them. We have developed a simple mechanism for neurologists to improve the classification rate whenever they encounter a false classification. This system is based on taking the discrete wavelet transform (DWT) of the signal's epochs, which are then reduced using principal component analysis and fed into a classifier. After discussing our approach, we show the classification performance of three types of classifiers: support vector machine (SVM), quadratic discriminant analysis, and artificial neural network. We found SVM to be the best working classifier. Our work exhibits the importance and viability of a self-improving and user-adapting computer-assisted EEG analysis system for diagnosing epilepsy which processes each channel exclusively of the others, along with a performance comparison of different machine learning techniques in the suggested system.

Epilepsy is a chronic neurological disease. The hallmark of this disease is recurring seizures. It has been cited that one out of a hundred people suffers from this disorder [1]. Electroencephalography is the most widely used technique for the diagnosis of epilepsy. The EEG signal represents voltage fluctuations caused by the ionic currents of neurons. Billions of neurons maintain the brain's electric charge: membrane transport proteins pump ions across their membranes, electrically charging the neurons. Due to volume conduction, waves of ions reach the electrodes on the scalp, pushing and pulling electrons in the electrode metal. The voltage difference due to this push and pull of electrons is measured by a voltmeter, whose readings are displayed as the EEG potential. A single neuron generates too small a charge to be measured by an EEG; what is measured is the summation of the synchronous activity of thousands of neurons that have a similar spatial orientation. Unique patterns are generated in the EEG during an epileptic seizure. These unique patterns help clinicians during the diagnosis and treatment of this neurological disorder. That is why EEG is widely used to detect and locate epileptic seizures and the epileptic zone. Localization of the abnormal epileptic brain activity is very significant for the diagnosis of epileptic disorders.

Usually the duration of a typical EEG varies from a few minutes to a few hours, but a prolonged EEG can last as long as 72 hours. This generates an immense amount of data to be inspected by the clinician, which can prove to be a daunting task.

Advancements in signal processing and machine learning techniques are making it possible to automatically analyse EEG data to detect epochs with epileptic patterns. A system based on these techniques can aid a neurologist by highlighting the epileptic patterns in the EEG to a significant extent. Of course, the task of diagnosis should be left to the neurologist. However, the neurologist's task becomes more efficient, as the system reduces the data to be analysed and lessens fatigue. Along with classification, these analysis software programs can also provide simultaneous visualization of multiple channels, which helps the clinician differentiate between generalized epilepsy and focal epilepsy.

It is well known that an epileptic seizure brings changes in certain frequency bands. That is why the spectral content of the EEG is usually used for diagnosis [2]. These bands are identified as delta (0.4-4 Hz), theta (4-8 Hz), alpha (8-12 Hz), and beta (12-30 Hz). Noachtar and Rémi mention almost ten types of epileptic patterns. However, most of the existing work focuses on only one of the epileptic patterns, the 3 Hz spike and wave, which is a trademark of absence seizure. Other types of patterns are rarely addressed [3].

Computer-assisted EEG classification involves several stages, including feature extraction, feature reduction, and feature classification. Wavelet transform has become the most popular feature extraction technique for EEG analysis due to its capability to capture transient features, as well as information about the time-frequency dynamics of the signal [4]. Other previously used feature extraction approaches for epilepsy diagnosis include empirical mode decomposition (EMD), multilevel Fourier transform (FT), and orthogonal matching pursuit [5-9]. Feature extraction is followed by feature reduction to reduce computational complexity and avoid the curse of dimensionality. Most commonly, the reduced feature vector consists of statistical summary measures (such as mean, energy, standard deviation, kurtosis, and entropy) of different sets of original (unreduced) features, although other methods such as principal component analysis, discriminant analysis, and independent component analysis have also been used for feature reduction [4, 7, 10, 11]. Feature extraction/reduction is followed by classification using a machine learning algorithm, such as artificial neural networks (ANN), support vector machines (SVM), hidden Markov models, and quadratic discriminant analysis [8, 11-14].

A very important and novel phase of our system is the user adaptation, or retraining, mechanism. There are multiple reasons why the introduction of this phase is advantageous. During this phase, the system tries to adapt its classification to the user's preferences. It has been cited that even expert neurologists sometimes disagree over a certain observation in EEG data. There is also a threat of overfitting by the classifier. In order to keep the classifier improving its performance as it encounters more and more examples, we have introduced this user-adaptive mechanism in our system. We consider the existing systems as dead because they cannot improve their classification rate after the initial training; they have no mechanism of learning or improving from a neurologist's corrective markings [15-17]. The agreement between different EEG readers is low to moderate; our adaptation mechanism helps cater for this issue, as our system tries to adapt its detection according to the user's corrective markings. The new corrective markings generate new examples with improved labels and thus populate the training set with newly labelled examples. So after retraining, the machine learning algorithms in the system adapt to the user's set of choices.

In the next section we explain our proposed method, which is followed by the results. In the results section, we explain how SVM performs better than QDA and ANN in our proposed method. We also show that exclusive processing of each channel results in a significant improvement in the classification rate. Here the terms epileptic pattern and epileptic spike are used interchangeably.

Computer-aided EEG analysis systems use the neurologist's marking and labelling of the EEG data as a benchmark to train themselves during the initial training phase. But after the initial training phase, these systems have no simple mechanism for neurologists to improve the system's classification after encountering a false classification. So we have proposed a method by which the system's classification can be improved by the user in a relatively simple way. This analysis system only tries to detect the epileptic spikes as described by Noachtar and Rémi. Later it adapts its detection of epileptic spikes exclusively for every user (Figure 4).

In the proposed system, we process each channel and each epileptic pattern exclusively of the others. This exclusive processing of each channel not only helps the user in diagnosing localized epilepsy but also eases the classifier's job. We consider different epileptic patterns to be independent of each other, and their separate handling helps us avoid error propagation from the detection of one epileptic pattern type to another. Our system's operation has two major phases: (A) the initial training phase and (B) the adaptation phase. Each of these phases comprises three parts: (1) feature extraction, (2) feature reduction, and (3) classification. Next we briefly explain each of these steps.

To decide which parts of the signal are epileptic and which are not, we first divided the whole signal into small chunks known as epochs. DWT was then applied to those epochs to enhance the visibility of epileptic activity, which is distinguished by certain spectral characteristics. These features were then processed to make them more suitable for the classification technique.

(a) Epoch Size. The first important part of feature extraction is epoch selection. An epoch is a small chunk of the signal which is processed at a time. The size of the epoch is very important: the larger it is, the less accurate the classification will be; the smaller it is, the higher the processing time will be. After testing different epoch sizes, we found a nonoverlapping 1 s window to yield the best accuracy. This also corroborates the work of Seng et al. [18] (Figure 1).
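
To make the epoching step concrete, the following is a minimal sketch (in Python with NumPy; the helper name segment_into_epochs is illustrative, not from the paper) of splitting one EEG channel into nonoverlapping 1 s epochs.

```python
import numpy as np

def segment_into_epochs(signal, fs, epoch_sec=1.0):
    """Split a 1-D EEG channel into nonoverlapping epochs of epoch_sec seconds."""
    samples_per_epoch = int(fs * epoch_sec)
    n_epochs = len(signal) // samples_per_epoch           # any trailing partial epoch is dropped
    trimmed = signal[:n_epochs * samples_per_epoch]
    return trimmed.reshape(n_epochs, samples_per_epoch)   # shape: (n_epochs, samples_per_epoch)

# Example: a 10-minute channel sampled at 256 Hz yields 600 one-second epochs.
channel = np.random.randn(10 * 60 * 256)
epochs = segment_into_epochs(channel, fs=256)
print(epochs.shape)  # (600, 256)
```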

(b) DWT. As discussed in the Introduction, spectral analysis is very informative when examining the EEG of a patient suspected of epilepsy. Wavelet decomposition, a multiresolution analysis technique, has profound advantages here: unlike an ordinary frequency transform, it allows us to analyse a signal at multiple frequency resolutions while maintaining time resolution. Wavelet decomposition lets us increase the frequency resolution in the spectral band of interest while maintaining the time resolution; in short, we can decimate these values simultaneously in the time and frequency domains.

During the wavelet transform, the original epoch is split into different subbands: the lower-frequency information is called the approximate coefficients and the higher-frequency information is called the detailed coefficients. The frequency subdivision into these subbands helps us analyse different frequency ranges of an EEG epoch while maintaining its time resolution [4, 8, 13]. The choice of coefficient levels is very important, as the epileptic activity resides only in the 0-30 Hz range. Coefficient levels of the DWT are determined with respect to the sampling frequency. So, the detailed levels of interest are adjusted on the run according to the sampling frequency, such that we get at least one level for each of the closest separate delta (0.4-4 Hz), theta (4-8 Hz), alpha (8-12 Hz), and beta (12-30 Hz) components of the signal. We discarded all the detailed coefficient levels that were beyond the 0-30 Hz range.
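
As an illustration only (not the authors' code), the sketch below uses PyWavelets to decompose one epoch with the db4 mother wavelet named later in the paper and keeps the detail levels whose nominal bands overlap 0-30 Hz; at 256 Hz this selects levels 3-7, matching Table 2. Decomposing a 1 s epoch this deeply causes boundary effects, which PyWavelets reports as a warning.

```python
import numpy as np
import pywt

def dwt_subbands(epoch, fs, wavelet="db4", max_level=7):
    """Decompose one epoch and keep the detail levels whose bands fall inside ~0-30 Hz."""
    coeffs = pywt.wavedec(epoch, wavelet, level=max_level)
    # coeffs = [cA_max, cD_max, ..., cD_1]; detail level j spans roughly fs/2**(j+1) .. fs/2**j Hz
    selected = {}
    for j in range(1, max_level + 1):
        band_low = fs / 2 ** (j + 1)
        if band_low < 30:                       # keep only bands overlapping the 0-30 Hz range
            selected[j] = coeffs[max_level - j + 1]
    return selected

epoch = np.random.randn(256)                    # one 1 s epoch sampled at 256 Hz
bands = dwt_subbands(epoch, fs=256)
print(sorted(bands))                            # [3, 4, 5, 6, 7]
```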

(c) Statistical Features. After selecting the detailed coefficients that represent the frequency band of interest, we calculated statistical features, namely the mean, standard deviation, and power of the selected wavelet coefficients. These statistical features are inspired by the work of Subasi and Gursoy [13].

(d) Standardization. These statistical features were then standardized. During the training stage, z-score standardization was applied to these features [19]. This standardization is just like the usual z-score normalization, but since we do not know the exact mean and standard deviation of the data to be classified during the classification/test stage, we used the mean and standard deviation of the training examples from the training stage to standardize (normalize) the features during the classification stage. We normalized the features by subtracting the training examples' mean and dividing by their standard deviation.
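
A minimal sketch of these two steps, assuming the selected detail coefficients from the previous step are available as a dict keyed by level; "power" is taken here as the mean squared coefficient, which is one reasonable reading of the paper.

```python
import numpy as np

def epoch_features(selected_bands):
    """Mean, standard deviation and power of each selected set of detail coefficients."""
    feats = []
    for level in sorted(selected_bands):
        c = np.asarray(selected_bands[level])
        feats += [c.mean(), c.std(), np.mean(c ** 2)]   # power approximated as mean squared coefficient
    return np.array(feats)

def fit_standardizer(train_features):
    """Estimate z-score statistics on the training examples only."""
    return train_features.mean(axis=0), train_features.std(axis=0)

def standardize(features, mu, sigma):
    """Apply the training-stage mean and standard deviation to train or test features."""
    return (features - mu) / sigma
```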

In order to avoid overinterpretation due to redundant data and misinterpretation due to noisy data, we applied a feature reduction method. Inclusion of this step increases the processing time, thus adding to the latency.

Dimensionality reduction using principal component analysis (PCA) is based on a very important trait of the data: its variance. PCA constructs a linear mapping that maximizes the variance captured by the retained components, which helps us discard the part of the data marked by lower variance. This reshaping and omission not only removes redundant data but also reduces noise.

During the training stage, PCA was applied to these features in order to reduce redundant and/or noisy data. We kept the components that captured approximately 95% of the total variance, which allowed us to reduce the 21 features to 9. We then fed these reduced features to the classifier's trainer. Here, based on our observation, we again assumed that the EEG data is stationary over a small length. So, during the testing stage, we took the PCA coefficient matrix from the training stage, multiplied it with the standardized statistical features of the blind test data, and then fed the top 9 features to the classifier.
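
A sketch of the reduction step with scikit-learn's PCA, using dummy feature matrices; the 95% variance threshold follows the text, and the projection fitted on training data is reused unchanged on the blind test data.

```python
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(0)
train_features_std = rng.standard_normal((500, 21))   # 500 standardized training epochs, 21 features each
test_features_std = rng.standard_normal((200, 21))    # blind test epochs, standardized with training stats

# Fit PCA on the training features only; keep components explaining ~95% of the variance.
pca = PCA(n_components=0.95)
train_reduced = pca.fit_transform(train_features_std)

# At test time, reuse the training-stage projection instead of refitting on the blind data.
test_reduced = pca.transform(test_features_std)
print(train_reduced.shape[1], "components retained")
```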

Classification is a machine learning task in which new observations are assigned to a category. This assignment is based on a training set containing observations whose category labels are known; the measured attributes of these observations are termed features. We tried three types of classification methods: (1) SVM, (2) QDA, and (3) ANN (Figure 3).

The reduced features were fed to these classifiers. Here, the reduced features are the statistical features of the selected wavelet coefficients after PCA reduction, as described in the previous section. All three processing parts were exclusive to each channel and each epileptic pattern, so, like the previous parts, the classifiers were also trained and tested exclusively for each channel.

Our system requires individual labelling of channels. There is a separate classifier for each channel and for each epileptic pattern type. So, the total number of classifiers equals the total number of channels multiplied by ten, where ten is the number of epileptic pattern types described by Noachtar and Rémi [3].

(e) Support Vector Machine. The support vector machine (SVM) is a supervised machine learning technique. SVM represents the examples as points in space, mapped so that points of different categories are divided by a gap that is as wide as possible. That division is then used to categorise new test examples based on which side of the gap they fall on.

(f) Quadratic Discriminant Analysis. Quadratic discriminant analysis (QDA) is a machine learning method widely used in statistics, pattern recognition, and signal processing to find a quadratic combination of features that characterizes an example into two or more categories. QDA's discriminating quadratic combination is used for both classification and dimensionality reduction.

(g) Artificial Neural Network. An artificial neural network (ANN) is a computational model inspired by the animal central nervous system. ANN is therefore represented by a system of interconnected neurons which compute values from their inputs. In ANN training, the weights associated with the neurons are iteratively adjusted according to the inputs and the difference between the outputs and the expected outputs. The iteration stops either when the network starts generating the expected results within a tolerable error range or when the iteration limit is reached.
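
The paper's experiments used MATLAB; purely as an illustrative sketch, the three classifier types can be compared in scikit-learn as below, with dummy PCA-reduced features, and with all hyperparameters other than the linear SVM kernel being assumptions.

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.discriminant_analysis import QuadraticDiscriminantAnalysis
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(1)
X_train, y_train = rng.standard_normal((400, 9)), rng.integers(0, 2, 400)   # reduced features, labels
X_test, y_test = rng.standard_normal((100, 9)), rng.integers(0, 2, 100)

classifiers = {
    "SVM": SVC(kernel="linear"),                                   # linear kernel, as reported later
    "QDA": QuadraticDiscriminantAnalysis(),
    "ANN": MLPClassifier(hidden_layer_sizes=(10,), max_iter=2000, random_state=1),
}
for name, clf in classifiers.items():
    clf.fit(X_train, y_train)
    print(name, "accuracy:", clf.score(X_test, y_test))
```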

In order to keep the classifier improving its performance as it encounters more and more examples, we have introduced a user-adaptive mechanism in our system. Our system allows the user to interactively select epochs of his choice by simply clicking a correction button. While using our system, when a user thinks that a certain epoch is falsely labelled/categorised, our system allows him to interactively mark that label as a mistake. These details are saved in a log in the background and are used to retrain the classifier, improving its classification rate and adapting it to the user over time. When the user selects the retraining option, the classifiers retrain themselves on the previous and the newly logged training examples. As every user has to log in with a personal ID, every corrective marking is saved only in that user's folder and only that user's classifiers are updated. Hence, the system's classifiers try to adapt to that user without affecting anyone else's classification.

The concept behind the inclusion of retraining is that if there is more than one example with the same attributes but different labels, the classifier will be trained toward the label with the larger population. The user's corrective markings increase the examples of his choice, thus making the classifier adapt to the user's choices in a straightforward way. Every user has classifiers trained exclusively for him, and his markings do not affect other users' classifiers. As we know, users sometimes do not agree on the choice of an epileptic pattern or its type. The exclusive processing for each user lets the same software keep the system trained for every user, and it also lets different users compare their markings with each other.
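
A minimal sketch of this per-user adaptation loop, under the assumption that one classifier per channel/pattern is wrapped together with a log of that user's corrective markings; the class and method names are illustrative, not the paper's code.

```python
import numpy as np
from sklearn.svm import SVC

class AdaptiveChannelClassifier:
    """One per-user, per-channel classifier that is retrained on logged corrective markings."""

    def __init__(self):
        self.clf = SVC(kernel="linear")
        self.X_log, self.y_log = [], []            # user's corrective markings accumulated over time

    def fit(self, X_train, y_train):
        self.X_train, self.y_train = np.asarray(X_train), np.asarray(y_train)
        self.clf.fit(self.X_train, self.y_train)

    def log_correction(self, epoch_features, corrected_label):
        self.X_log.append(np.asarray(epoch_features))
        self.y_log.append(corrected_label)

    def retrain(self):
        # Pool the initial training examples with the user's corrective markings and refit,
        # biasing the classifier toward the user's labels for recurring feature patterns.
        X = np.vstack([self.X_train] + [x.reshape(1, -1) for x in self.X_log])
        y = np.concatenate([self.y_train, np.array(self.y_log)])
        self.clf.fit(X, y)
```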

We currently have no standard to decide which neurologist is correct within a disagreeing group of neurologist users. So we keep each user's corrective markings in his own account, so that they do not interfere with another user who may not agree with his choices. The developed system thus tailors its selections to each neurologist's own preferences, and with every retraining after the initial training it adapts further to its users. This system does not aim to dictate to the neurologist but rather to learn from him and adapt to him, to save his time.

We want the classifier to think like the user and supplement him by highlighting the epochs of his choice, so after a few retraining cycles the gold standard will be the user himself. Including already-tested examples with their new labels in the training examples for retraining will bias the classifier's choices in favour of the user.

In this section, we discuss the results in detail. First, we describe the datasets used to train, test, and validate our method, and then discuss their versatility (Figure 7).

Two labelled datasets of surface EEG data from patients suspected of epilepsy were available to us. The two datasets differ considerably in terms of ethnicity, age, gender, and recording equipment. The datasets available to us concern generalised absence seizure, which is characterized by the 3 Hz spike-and-wave epileptic pattern in almost every channel. That is why we have classification results available only for one type of epilepsy, namely absence seizure.

The first database is the publicly available surface EEG dataset [20] provided by Children's Hospital Boston and the Massachusetts Institute of Technology, available on the PhysioNet website [10]. It contains 916 hours of 23-channel scalp EEG recordings from 24 patients suspected of epilepsy. The EEG is sampled at 256 Hz with 16-bit resolution. The 23rd channel is the same as the 15th channel (Table 1).

The second database of EEG datasets was provided by our collaborators at the Punjab Institute of Mental Health (PIMH), Lahore. Its sampling frequency is 500 Hz and it was recorded on 43 channels (of which 33 are EEG channels). This dataset consists of EEG recordings from 21 patients.

The data of interest lies within the frequency range of 0.3 Hz to 30 Hz. So after applying DWT with the db4 mother wavelet, we have to select detailed coefficients within this frequency range. In the case of the 256 Hz sampled CHB-MIT dataset, we have to go to at least 3 levels of decomposition and discard the first two, as demonstrated in Figure 2. In order to capture the information that discriminates between different types of epileptic patterns and identify them correctly without mistaking one for another, it is very helpful to decompose these detailed coefficients further into the beta, alpha, theta, and delta bands. So we further decomposed them up to the 7th level. Hence, we used the DWT's detailed coefficients of levels 3, 4, 5, 6, and 7 for the 256 Hz sampled CHB-MIT dataset (Table 2).

After the selection of the detailed wavelet coefficients, we calculated statistical features from them: the mean, power, and standard deviation of all of the shortlisted coefficients.

During the training stage, we first used simple z-score normalization to standardize the features [19] before applying feature reduction. The real issue arose when we tried to normalize the data during the testing stage. One way of doing this is to keep all of the training examples and apply the z-score to them together with the new test data. Instead of this time-consuming process, we made an assumption based on our observation that the mean and standard deviation do not deviate much; in this study the EEG time series is assumed to be stationary over segments of small length. So we used the mean and standard deviation of the training examples from the training stage to normalize the test examples. Figures 5 and 6 illustrate our observation: there is not much deviation between the mean and standard deviation of the training examples and those of the combined train + test examples.

Classification is used in machine learning to refer to the problem of identifying the discrete category to which a new observation belongs. Observations with known labels are used to train a classification algorithm, or classifier, using the features associated with each observation. For the CHB-MIT database, we had to train 220 classifiers in the initial training stage: 22 channels multiplied by 10 types of epileptic pattern (the 23rd channel is the same as the 15th). For the PIMH dataset, 330 classifiers were trained, as 33 EEG channels were utilized. We tried three different classifiers and found SVM to be the most accurate.
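
The classifier count can be expressed directly; a small sketch (again in scikit-learn rather than the MATLAB toolbox actually used) that allocates one classifier per (channel, pattern) pair:

```python
from sklearn.svm import SVC

N_CHANNELS = 22        # CHB-MIT: the 23rd channel duplicates the 15th, leaving 22 unique channels
N_PATTERNS = 10        # epileptic pattern types described by Noachtar and Rémi

# One independent classifier per (channel, pattern) pair: 22 * 10 = 220 for CHB-MIT,
# and 33 * 10 = 330 for the PIMH dataset.
classifiers = {(ch, pat): SVC(kernel="linear")
               for ch in range(N_CHANNELS) for pat in range(N_PATTERNS)}
print(len(classifiers))   # 220
```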

We used a blind validation mechanism over ten different feature data distributions to estimate the classification performance. These 10 separate blind data distributions were drawn from the large EEG dataset. Each distribution was randomly divided into two halves: we trained our classifier on one half and tested it on the other half. We repeated this for all ten distributions and then averaged the classification rate over the ten distributions.

3229 of the 3,297,600 epochs were randomly drawn, ten separate times, from the CHB-MIT dataset. Each time, half of them were used to train and half to test the initial classification. The averages of the sensitivity, specificity, and accuracy over these ten distributions are taken as the initial training phase performance.
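
A sketch of this repeated half/half blind validation, assuming a feature matrix X and binary labels y for one channel; the stratified split and confusion-matrix-based metrics are standard, though the exact sampling details are the paper's own.

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.metrics import confusion_matrix
from sklearn.svm import SVC

def repeated_half_split_evaluation(X, y, n_repeats=10):
    """Train on a random half, test on the other half, n_repeats times; return mean metrics."""
    sens, spec, acc = [], [], []
    for seed in range(n_repeats):
        X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.5, stratify=y, random_state=seed)
        clf = SVC(kernel="linear").fit(X_tr, y_tr)
        tn, fp, fn, tp = confusion_matrix(y_te, clf.predict(X_te)).ravel()
        sens.append(tp / (tp + fn))
        spec.append(tn / (tn + fp))
        acc.append((tp + tn) / (tp + tn + fp + fn))
    return np.mean(sens), np.mean(spec), np.mean(acc)

# Example with dummy data standing in for the 3229 sampled epochs of one channel.
rng = np.random.default_rng(2)
X = rng.standard_normal((3229, 9))
y = rng.integers(0, 2, 3229)
print(repeated_half_split_evaluation(X, y))
```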

In this study, we observed that, even in the case of absence seizure, epileptic patterns do not appear in exactly the same way in each channel. Handling each channel exclusively of the others was therefore another very important decision. We tested the classification both ways, that is, one classifier for all of the channels at once versus one separate classifier for each channel (Figure 8).

Processing each channel exclusively of the others improved our average accuracy from approximately 91% to approximately 95% in the case of SVM, a significant improvement of 4%. In the case of QDA, accuracy rose from 91% to 94%, an improvement of 3%, and in the case of ANN it rose from 91.8% to 92.9%, an improvement of 1.1%.

The results show that SVM suits our method best. ANN has a shorter classification time and QDA has a shorter training time compared to SVM, but considering the sensitivity and the classification improvement through corrective marking, we consider SVM the better choice over QDA and ANN. In the following sections, we show the results for all three types of classifier.

(a) Adaptation Mechanism. To test the adaptation mechanism, 807 corrective epochs were marked by the user for a CHB-MIT dataset file, with the same number of epochs marked for each channel. These corrective markings were saved in his log as training examples. These corrective markings, as new examples, were used together with the 32,290 epochs of the initial training stage to retrain the classifier. The number 32,290 comes from the 3229 epochs randomly selected from the whole CHB-MIT dataset in each of the ten separate draws during the initial training phase. The performance of the retrained classifier was then judged again on another 3000 random epochs (Figure 14).

In the case of the PIMH dataset, 57 corrective epochs were selected. This time, 19,374 epochs of the PIMH dataset were used along with the 57 corrective markings, as for PIMH we randomly selected 3229 epochs six times. The retrained classifier was tested on the 2361 remaining epochs.

(b) Support Vector Machine. We used the support vector machine classifier package available in the MATLAB Bioinformatics Toolbox. We found the linear kernel to be the most accurate SVM kernel, with a box constraint of 50.
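
For readers working outside MATLAB, a rough scikit-learn equivalent of this configuration is sketched below; the regularization parameter C plays the role of the box constraint, though the two implementations are not numerically identical.

```python
from sklearn.svm import SVC

# Linear-kernel SVM with a box-constraint-like regularization parameter of 50.
svm = SVC(kernel="linear", C=50)
```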

(c) CHB-MIT. For the CHB-MIT dataset, initial training of the classifier resulted in 96.3% average accuracy, 97.4% average specificity, and 93.5% average sensitivity for the 3 Hz spike and wave, which is characteristic of absence seizure. After initial training, our specificity is better than that of Shoeb [10] and Nasehi and Pourghassem [21], who used the same dataset to validate their techniques with different features and application methods. This shows that our technique provides better results even at the initial training phase (Figure 11).

In Table 3 we show the average initial classification and retrained classification results of our system for each channel. After correction of a few epochs, there is a visible improvement in the system's classification. The average accuracy of the system rose from 95.5% to 96.3%.

(d) PIMH. For the PIMH dataset, initial training of the classifier resulted in 90% average accuracy, 94% average specificity, and 80% average sensitivity for the 3 Hz spike and wave, which is characteristic of absence seizure (Figure 12).

In Table 4, we show the average initial classification and retrained classification results of our system for each channel. Table 4 shows that our technique is robust and also works on a different dataset. The average accuracy of the system rose from approximately 89% to 90%.

(a) CHB-MIT. For the CHB-MIT dataset, initial training of the classifier resulted in 94% average accuracy, 96% average specificity, and 90% average sensitivity for the 3 Hz spike and wave, which is characteristic of absence seizure. After initial training, our specificity is better than that of Shoeb [10] and Nasehi and Pourghassem [21] (Figure 13).

In Table 5, we show the average initial classification and retrained classification results of our system for each channel. After correction of a few epochs, there is a visible improvement in the system's classification. The average accuracy of the system rose from 94% to 95%.

(b) PIMH. For the PIMH dataset, initial training of the classifier resulted in 90% average accuracy, 95% average specificity, and 73% average sensitivity for the 3 Hz spike and wave, which is characteristic of absence seizure.

In Table 6, we show the average initial classification and retrained classification results of our system for each channel. Table 6 shows that our technique is robust and also works on a different dataset. The average accuracy of the system rose from approximately 89% to 90%.

(a) CHB-MIT. For the CHB-MIT dataset, initial training of the classifier resulted in 92.88% average accuracy, 98.66% average specificity, and 75.75% average sensitivity for the 3 Hz spike and wave, which is characteristic of absence seizure (Figure 9).

In Table 7, we show the average initial classification and retrained classification results of our system for each channel. After correction of a few epochs, there is a visible improvement in the system's classification. The average accuracy of the system rose from 92.88% to 93.96%.

(b) PIMH. For the PIMH dataset, initial training of the classifier resulted in 84% average accuracy, 94.8% average specificity, and 56.5% average sensitivity for the 3 Hz spike and wave, which is characteristic of absence seizure (Figure 10).

In Table 8, we show the average initial classification and retrained classification results of our system for each channel. Table 8 shows that our technique is robust and also works on a different dataset. The average accuracy of the system rose from approximately 84% to 85.43%.

Computer-assisted analysis of EEG has tremendous potential for assisting clinicians in diagnosis. A very important and novel phase of our system is the user adaptation, or retraining, mechanism. The introduction of this phase is important in many respects. In this phase, the system tries to adapt its classification to the user's preferences; this technique personalizes the classifier's classification. It has been cited that even expert neurologists sometimes disagree over a certain observation in EEG data. This system will be useful for disagreeing users, and it will also help them compare their results with each other.

There is also a threat of overfitting by the classifier. In order to keep the classifier improving its performance as it encounters more and more examples, we have introduced this user-adaptive mechanism in our system. We consider the existing systems as dead because they cannot improve their classification rate after the initial training (during software development). The self-improving mechanism after deployment makes our tool alive. This system can be made part of the whole epileptic diagnosis process. It will highlight the epileptic spikes in the whole EEG, reducing the user's fatigue and time consumption. We obtained high classification accuracy on datasets obtained from two different sites, which indicates the reproducibility of our results and the robustness of our approach.

In the future, we plan to make this a web-based application where neurologists can log in and consult each other's reviews of a particular subject. This will expose our system to a whole variety of examples and let it learn from all of them. Integration of video and its automatic analysis (video EEG) can help a neurologist diagnose epilepsy more reliably and can also help him distinguish between psychogenic and epileptic seizures. We will also investigate how much of an issue overfitting is in the reported performances, which by some claims are now even touching 100%. There is a need for methods/criteria to keep these algorithms from overfitting their detection to the limited number of available examples.

In the future, we will also include a slider in the system that allows the user to adjust the sensitivity and specificity before retraining. This assisting system is essentially a detection tool that keeps learning as it encounters better examples; more and better examples will certainly improve its performance. The agreement between different neurologists over EEG readings is low to moderate. If agreement could be reached on the correspondence of at least a few of the epileptic patterns with epileptic disease, then we could take this tool further and use it for diagnosis instead of just assistance.

One of the biggest limitations of this study is the unavailability of non-3 Hz spike-and-wave data. Even though we have included data features covering the entire epileptic frequency range, handled exclusively of each other, proof testing on such data would certainly be valuable for the progress of these assisting tools toward a diagnostic tool.

Copyright 2015 Malik Anas Ahmad et al. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

calcium carbonate (gcc) | hosokawa alpine

Calcium carbonate (GCC) produced from chalk, limestone, calcite or marble has developed in recent years from being just a simple, cheap filler to a highest-quality functional additive. GCC is used as a dry powder or slurry in many industrial applications for plastics, paints, rubber, sealants or paper. Furthermore, a strong trend and shift to significantly finer grades is clearly observed, in the range from d97 < 10 µm down to d97 < 1.6 µm. Specially designed, very steep particle size distributions as functional additives for the production of breathable film in the hygiene field (for example baby diapers) or building materials represent a further demanding challenge. GCC as an additive in almost all polymers must be subjected to a surface treatment, known as coating, by means of technical stearic acid. In recent years the demand for the highest coating GCC quality has increased significantly. The highest hydrophobicity, only a few ppm of residues in the upper size range of the particle size distribution, and the best possible 100% coating grade are required here. These demanding requirements are already met worldwide with Hosokawa Alpine equipment. For standard GCC fillers in the range d97 = 8-45 µm, a whole range of grinding/classifying processes is available with the focus on the lowest possible specific energy consumption.

Continuous development enables today's ball mill classifying systems to cover a wide capacity and fineness range of d97 = 5-45 µm. Vertical dry agitator ball mills, such as the ATR system, are used for the energy-efficient production of GCC powders in the range of d97 = 2.5 µm to 10 µm at low to medium capacities. Greater finenesses, up to d97 = 1.6 µm, especially at high outputs, are most efficiently produced by vertical wet mill systems such as the ANR-CL; the product is either used directly as a slurry in the paint or paper industry or, by means of an Atritor Cell Mill, dried in a single process step, optionally coated, and top-cut cleaned by means of an integrated air classifier.

Dry-processed GCC powder for the plastics industry achieves the highest quality available in the market by coating in a fully optimized Contraplex pin mill system with feed powder heating and top-cut control by an MS classifier, with less than a few ppm of residues on a 20 µm wet sieve.

Manufacture and supply of GCC at consistently high quality
Readiness to deliver all GCC qualities, anytime
Shifting the product portfolio significantly into the profitable range < d97 = 10 µm
Ensuring a constant and consistent particle size distribution
Perfect top cut limitation for coated and uncoated GCC
Assuring the highest possible quality of coated GCC
Lead the competition in fineness and quality
Use of processes with the lowest specific energy consumption
Use of state-of-the-art machine diagnostics - predictive maintenance

denitrification mechanism and artificial neural networks modeling for low-pollution water purification using a denitrification biological filter process - sciencedirect

Low-pollution water from a polluted river was treated by a denitrification filter.
Denitrifying bacteria concentrated in the lower layer of the up-flow filter.
NO2, NO3 and water temperature significantly affected the microbial community.
Artificial neural networks accurately predicted effluent nitrogen concentrations.

Low-pollution water treatment is an important process for improving surface water quality. In the present study, a denitrification biological filter (DNBF) was used to treat synthetic low-pollution water, representing the typical water present in a heavily polluted urban seasonal river. The feasibility of alkali-treated corncob as a slow-release denitrification carbon source was investigated. Furthermore, the performance of DNBF with different media (ceramsite, quartz sand and polypropylene plastics) and operating conditions was studied. The DNBF denitrification mechanism was analyzed and an artificial neural network model was established to predict the water quality of DNBF-treated low-pollution water effluent. Results showed that when the alkali-treated corncob dosage was 20 g and the hydraulic retention time (HRT) was 2 h, the denitrification efficiency of DNBF with ceramsite as the filter medium was highest (94.7% for nitrate nitrogen and 85.6% for total nitrogen), with the effluent total nitrogen concentration meeting Class IV of the Environmental Quality Standard for Surface Water (GB 3838-2002, China). The total nitrogen removal efficiency increased with increasing HRT (0.5-2.0 h) and alkali-treated corncob dosage (0-20 g). The denitrification rates established for DNBF with different media were ranked in the following order: ceramsite medium DNBF > polypropylene plastic medium DNBF > quartz sand medium DNBF. The relative abundance of denitrifying bacteria was highest (10.07% for quartz sand medium DNBF, 13.92% for polypropylene plastic medium DNBF and 23.13% for ceramsite medium DNBF) in the lower layer of the DNBFs, indicating that denitrifying bacteria are concentrated in the lower layer of the up-flow DNBF. Environmental factors (nitrite nitrogen, nitrate nitrogen, water temperature and pH) were found to affect the DNBF microbial community structure. The established artificial neural network model accurately predicted the effluent nitrogen concentration in DNBF-treated low-pollution water. DNBF provides a feasible system for the treatment of heavily polluted urban seasonal rivers, achieving high total nitrogen removal efficiency using a low-cost and easy-to-operate method.
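
The abstract does not state the inputs or architecture of the artificial neural network model, so the following is only a hedged, self-contained sketch of the general idea: a small feed-forward regressor mapping hypothetical operating variables (HRT, corncob dosage, influent TN, water temperature) to effluent total nitrogen, trained here on synthetic data.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)
# Hypothetical inputs: HRT (h), corncob dosage (g), influent TN (mg/L), water temperature (deg C).
X = rng.uniform([0.5, 0.0, 5.0, 10.0], [2.0, 20.0, 15.0, 30.0], size=(300, 4))
# Synthetic effluent TN, decreasing with HRT and carbon dosage, standing in for measured data.
y = 10 - 3 * X[:, 0] - 0.3 * X[:, 1] + 0.4 * X[:, 2] + rng.normal(0, 0.3, 300)

model = MLPRegressor(hidden_layer_sizes=(8,), max_iter=5000, random_state=0).fit(X, y)
print(model.predict([[2.0, 20.0, 10.0, 25.0]]))   # predicted effluent TN for one operating condition
```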

effect of earthworms on the performance and microbial communities of excess sludge treatment process in vermifilter - sciencedirect

Previous studies have shown that the stabilization of excess sludge by vermifiltration can be improved significantly through the use of earthworms. To investigate the effect of earthworms on enhancing sludge stabilization during the vermifiltration process, a vermifilter (VF) with earthworms and a conventional biofilter (BF) without earthworms were compared. The sludge reduction capability of the VF was 85% higher than that of the BF. Specifically, elemental analysis indicated that earthworms enhanced the stabilization of organic matter. Furthermore, earthworm predation strongly regulated microbial biomass while improving microbial activity. Denaturing gradient gel electrophoresis (DGGE) analysis showed that the most abundant microbes in the VF biofilms and earthworm casts were Flavobacterium, Myroides, Sphingobacterium, and Myxococcales, all of which are known to be highly effective at degrading organic matter. These results indicate that earthworms can improve the stabilization of excess sludge during vermifiltration, and reveal the processes by which this is achieved.

The vermifilter (VF) had higher sludge reduction ability than a biofilter (BF). Elemental analysis showed that earthworms enhanced the stabilization of excess sludge. Dehydrogenase activity in VF biofilms was much higher than that in the BF. Specific microbes in earthworm cast enhanced the degradation of organic matter.
