Name: Allan Kardec Duailibe Barros Filho (d953406)

Thesis title: Adaptive Noise Cancellation of Cardiac Signals


Thesis summary

The study of cardiac function has long been of interest to physicians. For example, anaesthetists and surgeons frequently need cardiac output values (the amount of blood ejected by the heart in one minute) in response to anesthesia, blood loss, traumatic injuries and the effects of surgical procedures. In the case of cardiac patients, the physician needs detailed information on how the heart is working in order to decide what level of exercise stress to recommend.

Perhaps the simplest way to evaluate the condition of the heart is to look at the heart itself through an invasive intervention. However, this solution is hardly desirable, not only because of the intervention itself, but also because it precludes measurement in many situations, for example during exercise. Researchers have therefore been searching for non-invasive methods to fulfill this need.

The most well-known type of non-invasive cardiac measurement is the classical electrocardiogram, or ECG. The ECG can provide many insights into cardiac function, such as the heart rate (the number of beats in one minute), which can be used successfully in the diagnosis of various diseases, and it also carries much other information about the cardiac status. Moreover, in recent decades there has been increasing interest in impedance plethysmography. Impedance plethysmography, also called impedance cardiography (ZCG), is the study of the cardiac status by means of electrical impedance measurements on the thorax. The ZCG provides information that the ECG may not, such as the cardiac stroke volume (the amount of blood ejected by the heart in one beat).

Extracting information from a signal is not as easy as it might appear. Cardiac-related signals are generally picked up from the thorax through electrodes, and many undesired signals may come out together with the cardiac one. Examples are the interference generated by the electrodes themselves, respiration, body movement, and so on.

In this thesis, we are interested in processing heart-generated signals so that they become easier, in some sense, for a professional to analyze. We concentrated our study on this problem, i.e., the removal of these undesired signals from the cardiac one, and narrowed it further to electrocardiographic and impedance cardiographic signals only.

We take the impedance cardiographic signal as an example to review the techniques that can be (and actually were) used for the removal of those interferences, or "noise". Assessing the cardiac status using impedance cardiography has been the subject of hundreds of works. However, the respiratory components and the movement artifacts cause changes in the baseline of the signal, which may produce measurement errors. The techniques that have been proposed to deal with this problem are discussed below.

First, breath holding was suggested to eliminate the respiratory components; however, it can alter the stroke volume during measurements. Another possible solution is to average, for example, five heart beats and then calculate the stroke volume. This technique is called ensemble averaging, but the averaging eliminates the beat-by-beat character of the cardiac output.
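
As a minimal illustration of ensemble averaging (not code from the thesis), the Python sketch below averages a few consecutive beats into one representative waveform; the beat-onset indices and the window length are hypothetical inputs assumed to come from a separate beat-detection step.

```python
import numpy as np

def ensemble_average(signal, beat_onsets, beat_length, n_beats=5):
    """Average `n_beats` consecutive beats of a cardiac signal.

    `beat_onsets` are sample indices (e.g. from R-wave detection) and
    `beat_length` is the number of samples kept per beat; both are
    assumed to be available from a separate detection step.
    """
    segments = [signal[i:i + beat_length]
                for i in beat_onsets[:n_beats]
                if i + beat_length <= len(signal)]
    # One "representative" beat; beat-by-beat variability is lost here.
    return np.mean(segments, axis=0)
```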

However, the respiratory signal has its energy concentrated mostly at frequencies lower than the cardiac one, at least when the subject under observation is at rest. This information can be exploited with the so-called classical filtering techniques: one work suggested using a band-pass filter around the cardiogenic frequency, and another a high-pass filter just below it.
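
The fixed-filter idea can be sketched as follows, assuming a 100 Hz sampling rate, a resting cardiac fundamental near 1 Hz, and an illustrative cutoff of 0.8 Hz; the actual filters and cutoffs in the cited works may differ.

```python
import numpy as np
from scipy.signal import butter, filtfilt

fs = 100.0      # sampling rate in Hz (assumed)
cutoff = 0.8    # Hz, just below a resting cardiac fundamental of about 1 Hz

# 4th-order Butterworth high-pass to suppress the slow respiratory baseline
b, a = butter(4, cutoff / (fs / 2), btype="highpass")

t = np.arange(0, 30, 1 / fs)
zcg = np.sin(2 * np.pi * 1.2 * t)           # toy cardiogenic component
resp = 0.5 * np.sin(2 * np.pi * 0.25 * t)   # toy respiratory baseline drift
filtered = filtfilt(b, a, zcg + resp)       # zero-phase high-pass filtering
```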

Nevertheless, the techniques cited above did not completely solve the problem. They were proposed to eliminate respiratory artifacts, which are not the only interferences that adversely affect the ZCG (or ECG) signals. There remains, for example, the problem of movement and electrode artifacts, whose spectra are unknown and may sometimes overlap the cardiac signal spectrum, and these interferences may increase during exercise. One way to eliminate such movement artifacts is to use adaptive filters, for which, to a certain extent, a priori knowledge of the signal statistics is not necessary. This is the approach we use in this work.

There are many ways to extract the desired information from the cardiac signal. All of them have the merit of minimizing the interference, but they also have disadvantages, which we shall discuss now.

Ideally, the filter should eliminate the interference completely and let only the cardiac signal pass. This task is not as simple as it may seem, and in this thesis we try to eliminate some of the problems found in previous approaches. For example, as one can easily imagine, the heart beat is not perfectly periodic. Thus, if one uses a high-pass filter with a constant cutoff frequency, problems may arise when the heart rate increases, as during exercise: if the fundamental heart frequency doubles, as it commonly does during exercise, a cutoff fixed below the resting fundamental will no longer eliminate the interference that now appears above it.

To cope with this problem, adaptive filtering is proposed. We use a filter that automatically adjusts its cutoff frequency to the fundamental heart frequency. Moreover, we propose a filter that also eliminates the interference lying between two harmonics of the heart frequency. These two are new concepts that were not used in the previous approaches, all of which relied on classical filtering techniques, as we have seen.

However, this filter is not perfect and presents some problems. An adaptive filter has a "learning time": if a transient occurs and the signal parameters change, the filter takes some time to learn the new parameters. Another problem is that the proposed filter cannot eliminate the interference if it overlaps the cardiac signal in frequency.

Therefore, we tried another approach based on higher-order statistics and propose a neural network to cope with the problems found in the proposed filter. Using this neural network, there was no problem when the interference overlapped the cardiac signal in frequency. We also found that this neural network was reasonably fast.

In general terms, this thesis can be divided into two parts. The first deals with the initial idea of an algorithm to filter cardiac signals using second-order statistics; it spans Chapters II, III and IV. The second part uses a more powerful technique based on higher-order statistics, with which we could find solutions for the problems that second-order statistics could not solve; this is carried out in Chapter V. Chapter I gives the introduction, where the problem we face is discussed in more depth and the approaches to be used are presented. Chapter II presents a filter based on second-order statistics to eliminate the interference. Chapter III studies the behavior of that filter in a mean squared error framework, and Chapter IV continues this study, focusing on the problems of bias and of noise overlapping the signal in frequency. Chapter V studies an algorithm that eliminates the frequency-overlapping problem encountered in the first approach. Chapter VI gives the conclusions of this study and points out directions for further research.

We shall now have a brief look at the work carried out in each chapter.

First, we make use of the quasi-periodic nature of the cardiac signal and of the fact that such a signal can be written as a Fourier series. The approach is very simple: given the frequency of the signal, we estimate the amplitudes of the coefficients related to the fundamental frequency and to its harmonics, and regenerate the original signal from them. In other words, we use an adaptive algorithm to learn the Fourier series coefficients that constitute the cardiac signal involved; that is, we use an event-related filter, where the event is the periodicity of the signal. In this field, Vaz and Thakor used sine and cosine waves as the reference input in a Fourier linear combiner (FLC) to estimate evoked potentials. Instead of using the FLC itself, in Chapter II we introduce a scaling factor to enhance flexibility when choosing the filter parameters, and call the resulting filter the scaled FLC (SFLC).

As said before, in order to estimate the Fourier coefficients it is necessary to use an adaptive algorithm. For the SFLC, we used the least-mean-square (LMS) algorithm, whose advantages are its simplicity and ease of implementation.
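
A minimal sketch of such an LMS-driven Fourier linear combiner is given below, assuming the fundamental heart frequency is known in advance; the factor `g` merely stands in for the SFLC scaling modification, and the exact update used in the thesis may differ.

```python
import numpy as np

def sflc(d, f0, fs, n_harmonics=3, mu=0.01, g=1.0):
    """LMS-driven Fourier linear combiner (illustrative sketch).

    d  : recorded signal (cardiac component plus interference)
    f0 : assumed fundamental heart frequency in Hz
    g  : scaling factor standing in for the SFLC modification
    Returns the estimated periodic (cardiac) component.
    """
    n = np.arange(len(d))
    # Reference inputs: sines and cosines at the fundamental and its harmonics
    X = np.vstack(
        [np.sin(2 * np.pi * k * f0 * n / fs) for k in range(1, n_harmonics + 1)]
        + [np.cos(2 * np.pi * k * f0 * n / fs) for k in range(1, n_harmonics + 1)]
    )
    w = np.zeros(X.shape[0])
    y = np.zeros(len(d))
    for i in range(len(d)):
        y[i] = w @ X[:, i]                # Fourier-series estimate at sample i
        e = d[i] - y[i]                   # estimation error
        w = w + 2 * mu * g * e * X[:, i]  # LMS update of the Fourier coefficients
    return y
```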

In Chapter III, we studied the mean squared error behavior of the FLC and SFLC in a general framework, called FLC-based filters, so that the results could be easily extended to the FLC. Since we are dealing with an adaptive estimation of Fourier coefficients, related to a fundamental frequency and its harmonics, many factors influence the mean squared error, and we try to cover all of them in this chapter. We therefore studied the roles that the number of harmonics, the step size, and the stationarity or non-stationarity of the signal play in the mean squared error of the filter.

For every adaptive algorithm there is a tradeoff between the error and the learning time: the bigger the error, the smaller the learning time, and vice versa. Still in Chapter III, we found a surprising result: the step size of the adaptive algorithm can be altered recursively so that, at each time step, it reaches the smallest error with the greatest speed. We applied this result to estimate a stroke volume measured in an actual experiment. We also found that the error for this step size is inversely proportional to the level of noise added to the signal. Additionally, at the end of that chapter, we give the SFLC user the possibility of choosing the filter error, a feature motivated by frequent questions from SFLC users.
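
The recursive step-size rule derived in Chapter III is not reproduced here; purely as an illustration of the general idea of a time-varying step size, the sketch below uses a simple error-driven rule in the spirit of variable-step-size LMS, which is not the thesis's recursion.

```python
import numpy as np

def vss_lms_step(w, x, d, mu, alpha=0.97, gamma=1e-3, mu_max=0.1):
    """One LMS iteration with an error-driven, time-varying step size.

    Illustrative rule only: the step size grows while the instantaneous
    error is large (fast learning) and shrinks as the error decreases
    (small steady-state error).
    """
    e = d - w @ x
    mu = min(alpha * mu + gamma * e**2, mu_max)  # recursive step-size update
    w = w + 2 * mu * e * x                       # LMS weight update
    return w, mu
```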

The FLC-based filters show, however, poor performance when the noise spectrally overlaps the desired signal or when this signal is biased. We studied these effects in Chapter IV, again within a mean squared error framework.

The overlapping effect is easy to understand intuitively. Looking at the power spectrum of the FLC-based filters, we find that they act as a bank of notch filters. Therefore, if the desired signal and the noise overlap in frequency, it is straightforward to see that the noise will not be removed. At first glance at the same filter power spectrum, one might also imagine that the bias is eliminated. In the chapter in question we show that this is not true, and that the bias can be a source of error in the output signal; we also give a possible solution to this problem.

The bias can be easily removed, but the same does not hold for the other issue, i.e., the signal and the noise overlapping in frequency. Besides, the error of FLC-based filters also increases with the number of harmonics used to recover the cardiac signal, as shown in Chapter III. To deal with these problems, we used an algorithm different from the one used to implement the FLC-based filters, called independent component analysis (ICA). As the simulation results shown in Chapter V demonstrate, ICA is a powerful tool for separating signals that overlap in frequency. In addition, ICA algorithms are almost as simple to implement as the LMS. The difference between the two is that ICA algorithms are based on higher-order statistics, i.e., they exploit the higher-order moments of the studied signals. Since they do not rely on amplitude estimation given the frequency of the signal, as the FLC-based filters do, the problem with the number of harmonics disappears when an ICA algorithm is used.
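
As a rough illustration of the separation idea (not the ICA algorithm developed in Chapter V), the sketch below applies scikit-learn's FastICA to two synthetic mixtures of a toy cardiac-like signal and a broadband artifact; the mixing matrix and both sources are invented for the example, and the recovered components come back only up to scaling and permutation, the ambiguity discussed later in connection with Chapter VI.

```python
import numpy as np
from sklearn.decomposition import FastICA

fs = 100.0
t = np.arange(0, 20, 1 / fs)

cardiac = np.sign(np.sin(2 * np.pi * 1.2 * t))   # toy quasi-periodic cardiac-like signal
artifact = np.random.laplace(size=t.size)        # toy broadband movement artifact

# Two observed mixtures, e.g. two electrode channels seeing both sources
A = np.array([[1.0, 0.6],
              [0.4, 1.0]])
X = np.column_stack([cardiac, artifact]) @ A.T

ica = FastICA(n_components=2, random_state=0)
S_hat = ica.fit_transform(X)  # recovered sources, up to scaling and permutation
```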

Despite that, the ICA algorithms proposed so far are still adaptive, meaning that the tradeoff between error and learning time discussed before appears again. Besides, since ICA algorithms deal with the estimation of higher-order moments, they are usually slower than the LMS algorithm. To make the algorithm faster, we used a neural network associated with a time-varying step size. The idea is similar to the one used in Chapter III, but here the time-varying step size works at the limit, meaning that above it the algorithm would probably diverge.

However, not all problems are solved, and there is still plenty of room for research in this area. For example, one problem with ICA is scaling: the output signals are usually scaled versions of the desired ones. Even though we could use some normalization, as proposed in Chapter V, we cannot trust the output to, for example, measure the stroke volume; in that case, the SFLC is more appropriate. One idea is therefore to try to use the SFLC and ICA together. These and other questions are discussed in Chapter VI.

