1. Introduction

Space-time adaptive processing (STAP) can effectively suppress strong ground/sea clutter and improve the moving target indication performance of airborne/spaceborne radar systems.1 In full-dimension STAP algorithms, however, a large number of independent and identically distributed (I.I.D.) training snapshots is required to keep the average signal-to-clutter-plus-noise ratio (SCNR) loss within 3 dB.2 Moreover, full-dimension STAP algorithms have a high system complexity and require many memory elements.3 In practical applications, it is generally difficult to satisfy these requirements.

To date, many algorithms have been proposed to overcome the drawbacks of full-dimension STAP. Reduced-rank STAP algorithms restrict the adaptive problem to the clutter subspace while maintaining the performance of full-dimension STAP,4,5 so the required number of snapshots can be reduced. However, they rely on eigenvalue decomposition, which is computationally expensive. To reduce the computational expense and the number of training snapshots simultaneously, several reduced-dimension STAP algorithms have been proposed, such as the joint domain localized approach and auxiliary channel processing.6–8 However, the nonadaptive selection of the reduced-dimension projection matrix, which relies on intuitive experience, results in a certain performance degradation.2

The sparsity of the filter coefficients in STAP has recently been studied, and a theoretical framework for sparsity-based STAP using the $\ell_1$-regularized constraint, i.e., the least absolute shrinkage and selection operator (LASSO), has been established.9–12 The classical algorithms for solving the LASSO problem adopt convex optimization, e.g., the interior point algorithm, to obtain a sparse solution. The complexity of these algorithms can be very high when the size of the problem is large, which is impractical. To solve the optimization problem more efficiently, the $\ell_1$-regularized recursive least-squares STAP (RLS-STAP) algorithm,13 the $\ell_1$-regularized least-mean-square STAP algorithm,14 and the homotopy-STAP algorithm15 have been proposed. Sparsity-based STAP techniques have been shown to provide high resolution and better performance than conventional STAP algorithms.16

The alternating direction method of multipliers (ADMM) combines the decomposability of dual ascent with the rapid convergence of the method of multipliers.17,18 This technique is well suited to solving $\ell_1$-regularized optimization problems, particularly large-scale ones.19 The ADMM technique can converge within a few tens of iterations, which is acceptable in practical use.20 In this study, according to the criterion of minimizing the mean-square error, we propose an algorithm based on the ADMM technique to solve the $\ell_1$-regularized STAP problem. The proposed method provides better performance with a small number of I.I.D. training snapshots and without a large number of calculations.

The remainder of this paper is organized as follows. The system model of the generalized side-lobe canceler (GSC) form of sparsity-based STAP is introduced in Sec. 2. In Sec. 3, the theory of the ADMM algorithm is introduced, and the $\ell_1$-regularized ADMM-STAP algorithm is proposed; the associated optimization problem is formulated and solved analytically. The performance improvement of the proposed algorithm is shown with simulated data in Sec. 4 and with the Mountaintop data set in Sec. 5. Section 6 provides the conclusions.
Notation: In this paper, a variable, a column vector, and a matrix are represented by a lowercase letter, a lowercase bold letter, and a capital bold letter, respectively. The operations of transposition, complex conjugation, and conjugate transposition are denoted by $(\cdot)^T$, $(\cdot)^*$, and $(\cdot)^H$, respectively. The symbol $\otimes$ denotes the Kronecker product, and $\|\cdot\|_p$ denotes the $\ell_p$-norm operator. $E\{x\}$ denotes the expected value of $x$, $|x|$ indicates the absolute value of $x$, and $\|\mathbf{x}\|_1=\sum_i |x_i|$. $\mathrm{sgn}(\cdot)$ is the component-wise sign function.13

2. Background and Problem Formulation

2.1 System Model

The STAP technique is known for its ability to suppress clutter energy interference while detecting moving targets. Consider an airborne radar system equipped with a uniform linear array (ULA) consisting of $N$ receiving elements, as shown in Fig. 1. The radar transmits $M$ identical pulses at a constant pulse repetition frequency (PRF) $f_r = 1/T_r$ during a coherent processing interval (CPI), where $T_r$ is the pulse repetition interval. The received signal from the range bin of interest is represented as $\mathbf{x} = \mathbf{s} + \mathbf{c} + \mathbf{n}$, where $\mathbf{s}$ is the target vector, $\mathbf{c}$ is the clutter vector, and $\mathbf{n}$ is the thermal noise vector with noise power $\sigma^2$ on each channel and pulse. The space-time clutter vector can be represented as21

$\mathbf{c} = \sum_{i=1}^{N_c} \alpha_i \, \mathbf{v}(f_{d,i}, f_{s,i}),$

where $N_c$ denotes the number of clutter patches in the range bin of interest and $\alpha_i$ denotes the random complex reflection coefficient of the $i$'th patch. $f_{d,i}$ and $f_{s,i}$ are the Doppler frequency and spatial frequency of the $i$'th clutter patch, respectively, where $\lambda$ is the wavelength and $d$ is the inter-sensor spacing of the ULA. $\mathbf{v}(f_d, f_s)$ is the space-time steering vector, which is defined as a Kronecker product of the temporal and spatial steering vectors, i.e., $\mathbf{v}(f_d, f_s) = \mathbf{v}_d(f_d) \otimes \mathbf{v}_s(f_s)$, where

$\mathbf{v}_d(f_d) = \left[1, e^{j2\pi f_d T_r}, \ldots, e^{j2\pi (M-1) f_d T_r}\right]^T, \qquad \mathbf{v}_s(f_s) = \left[1, e^{j2\pi f_s}, \ldots, e^{j2\pi (N-1) f_s}\right]^T.$

The target vector is $\mathbf{s} = \xi_t \mathbf{v}(f_{d,t}, f_{s,t})$, where $f_{d,t} = 2v_r/\lambda$ and $f_{s,t} = d\cos\theta_t/\lambda$. $v_r$ is the radial velocity of the moving target, and $\theta_t$ represents the angle of arrival (AOA) of the target. Note that in the following, the target space-time steering vector $\mathbf{v}(f_{d,t}, f_{s,t})$ is rewritten as $\mathbf{v}_t$ for convenience.

To clearly illustrate how the STAP method works, the GSC form of the STAP method is shown in Fig. 2. The upper branch applies the quiescent (nonadaptive) weight vector $\mathbf{w}_q = \mathbf{v}_t/(\mathbf{v}_t^H\mathbf{v}_t)$ to form the main channel $d = \mathbf{w}_q^H\mathbf{x}$. $\mathbf{B}$ is the signal blocking matrix, which satisfies $\mathbf{B}\mathbf{v}_t = \mathbf{0}$ and $\mathbf{B}\mathbf{B}^H = \mathbf{I}$. Generally, $\mathbf{B}$ can be obtained by singular value decomposition (SVD) of $\mathbf{v}_t$, i.e., by collecting the left singular vectors associated with its zero singular values. After the transformation by $\mathbf{B}$, the target-free auxiliary data $\mathbf{x}_B = \mathbf{B}\mathbf{x}$ are available. In full-dimension STAP, all $NM-1$ auxiliary channels are selected to cancel the clutter. The output is

$y = d - \mathbf{w}^H\mathbf{x}_B,$

where the optimum adaptive weight vector is $\mathbf{w} = \mathbf{R}_B^{-1}\mathbf{p}$. $\mathbf{R}_B = E\{\mathbf{x}_B\mathbf{x}_B^H\}$ is the clutter covariance matrix, and $\mathbf{p} = E\{\mathbf{x}_B d^*\}$ is the cross-correlation vector between $\mathbf{x}_B$ and $d$. The output clutter power can be computed as

$P_{\mathrm{out}} = E\{|d - \mathbf{w}^H\mathbf{x}_B|^2\} = \mathbf{w}_q^H\mathbf{R}\mathbf{w}_q - \mathbf{p}^H\mathbf{R}_B^{-1}\mathbf{p},$

where $\mathbf{R} = E\{\mathbf{x}\mathbf{x}^H\}$ is the input covariance matrix. The output SCNR can be expressed as

$\mathrm{SCNR}_{\mathrm{out}} = \frac{|\xi_t|^2 \, |\mathbf{w}_q^H\mathbf{v}_t|^2}{P_{\mathrm{out}}}.$

Maximizing the output SCNR is equivalent to maximizing the detection probability. However, $\mathbf{R}_B$ and $\mathbf{p}$ are unknown in practice, and secondary training snapshots are required to estimate these parameters.15 The best performance can be achieved if there are sufficient I.I.D. training snapshots. However, in many practical cases, it is impossible to obtain sufficient snapshots, and the performance degrades significantly.

2.2 Sparsity-Based STAP

According to STAP theory, the rank of the clutter covariance matrix is far lower than the number of degrees of freedom (DOFs) of the system.22,23 Consequently, reduced-rank and reduced-dimension STAP algorithms have been used to shorten the filter length; in other words, the filter coefficient vector obtained by full-dimension STAP is approximately sparse.14 Hence, in the GSC form of the sparsity-based STAP algorithm (see Fig. 2), the filter coefficient vector $\mathbf{w}$ can be replaced by $\mathbf{w}_s = \mathbf{w} + \boldsymbol{\varepsilon}$, where $\mathbf{w}_s$ denotes a sparse vector. The output of the sparsity-based STAP is

$y_s = d - \mathbf{w}_s^H\mathbf{x}_B.$

Hence, the output clutter power for the sparsity-based STAP can be computed as

$P_{\mathrm{out},s} = E\{|d - \mathbf{w}_s^H\mathbf{x}_B|^2\} = P_{\mathrm{out}} + \boldsymbol{\varepsilon}^H\mathbf{R}_B\boldsymbol{\varepsilon},$

where $\boldsymbol{\varepsilon}$ is the weight error vector caused by the sparsity constraint.
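To make the space-time data model and the GSC decomposition above concrete, a minimal Python/NumPy sketch is given below. It follows the steering-vector and blocking-matrix definitions stated in this section; the function names, the example dimensions and frequencies, and the unit-gain quiescent-weight normalization are illustrative assumptions and not part of the original paper.

import numpy as np

def steering_vector(f_d, f_s, M, N):
    """Space-time steering vector v(f_d, f_s) = v_d(f_d) kron v_s(f_s).
    Here f_d and f_s are treated as normalized (dimensionless) frequencies."""
    v_d = np.exp(2j * np.pi * f_d * np.arange(M))   # length-M temporal steering vector
    v_s = np.exp(2j * np.pi * f_s * np.arange(N))   # length-N spatial steering vector
    return np.kron(v_d, v_s)                        # length-MN space-time steering vector

def gsc_decomposition(v_target):
    """Quiescent weight and blocking matrix B with B @ v_target = 0 and B @ B^H = I."""
    w_q = v_target / (v_target.conj() @ v_target)   # quiescent (nonadaptive) weight
    # Left singular vectors orthogonal to v_target span the blocking subspace.
    U, _, _ = np.linalg.svd(v_target.reshape(-1, 1), full_matrices=True)
    B = U[:, 1:].conj().T                           # (MN-1) x MN blocking matrix
    return w_q, B

# Example: main channel d = w_q^H x and auxiliary channels x_B = B x
M, N = 8, 8
v_tgt = steering_vector(0.25, 0.1, M, N)
w_q, B = gsc_decomposition(v_tgt)
x = (np.random.randn(M * N) + 1j * np.random.randn(M * N)) / np.sqrt(2)
d = w_q.conj() @ x
x_B = B @ x

Because the columns of U are orthonormal, the resulting B satisfies B v_t = 0 and B B^H = I, as required of the signal blocking matrix.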
Note that the target signal power is not affected by the sparsity constraint. The output SCNR can be expressed as

$\mathrm{SCNR}_s = \frac{|\xi_t|^2 \, |\mathbf{w}_q^H\mathbf{v}_t|^2}{P_{\mathrm{out}} + \boldsymbol{\varepsilon}^H\mathbf{R}_B\boldsymbol{\varepsilon}}.$

Hence, the aim is to minimize the mean-square error $E\{|d - \mathbf{w}_s^H\mathbf{x}_B|^2\}$. The objective function of the minimization problem can be rewritten as

$J(\mathbf{w}_s) = E\{|d - \mathbf{w}_s^H\mathbf{x}_B|^2\}.$

$\mathbf{w}_s$ is sparse, i.e., most of its elements are considerably smaller than the others. Hence, the minimization problem can be expressed as

$\min_{\mathbf{w}_s} \; E\{|d - \mathbf{w}_s^H\mathbf{x}_B|^2\} + \gamma\|\mathbf{w}_s\|_0, \qquad (12)$

where $\gamma$ is the regularization parameter for regulating the sparseness of $\mathbf{w}_s$. However, the $\ell_0$-norm problem is nonconvex. Consequently, it is intractable even for optimization problems of moderate size. Equation (12) can be further relaxed to the convex LASSO problem

$\min_{\mathbf{w}_s} \; E\{|d - \mathbf{w}_s^H\mathbf{x}_B|^2\} + \gamma\|\mathbf{w}_s\|_1. \qquad (13)$

In contrast to Eq. (12), Eq. (13) is convex and can be solved by convex optimization algorithms, such as the interior point method (IPM). The complexity of IPM-STAP can be very high when the size of the problem is large, which is not pragmatic in practice.

3. Proposed ℓ1-Regularized STAP Algorithm

3.1 Variable Splitting

In general, the ADMM algorithm converges rapidly when a result of modest accuracy is acceptable. Fortunately, this is the case for the parameter estimation problem in the STAP application considered here: solving a statistical parameter estimation problem to very high accuracy often yields little improvement.19 The ADMM-STAP algorithm is based on variable splitting, i.e., we split the variable into a pair of variables, say $\mathbf{w}$ and $\mathbf{z}$, and add the constraint that the two variables are equal. Moreover, the objective function is split into the sum of two functions, and we then minimize this sum. Explicitly, Eq. (13) can be rewritten in the ADMM form

$\min_{\mathbf{w},\,\mathbf{z}} \; f(\mathbf{w}) + g(\mathbf{z}) \quad \text{subject to} \quad \mathbf{w} - \mathbf{z} = \mathbf{0}, \qquad (14)$

where $f(\mathbf{w}) = E\{|d - \mathbf{w}^H\mathbf{x}_B|^2\}$ and $g(\mathbf{z}) = \gamma\|\mathbf{z}\|_1$. The problems of Eqs. (13) and (14) are clearly equivalent. In many cases, it is easier to solve the constrained problem of Eq. (14) than the original unconstrained problem. As in the method of multipliers, the augmented Lagrangian function is formed as19,20

$L_\rho(\mathbf{w},\mathbf{z},\boldsymbol{\lambda}) = f(\mathbf{w}) + g(\mathbf{z}) + \mathrm{Re}\{\boldsymbol{\lambda}^H(\mathbf{w}-\mathbf{z})\} + \frac{\rho}{2}\|\mathbf{w}-\mathbf{z}\|_2^2, \qquad (15)$

where $\rho > 0$ is the augmented Lagrangian parameter and $\boldsymbol{\lambda}$ is a vector of Lagrange multipliers.

3.2 ℓ1-Regularized ADMM-STAP

Define the residual and the scaled dual variable as $\mathbf{r} = \mathbf{w} - \mathbf{z}$ and $\mathbf{u} = \boldsymbol{\lambda}/\rho$, respectively. Then, we have

$L_\rho(\mathbf{w},\mathbf{z},\mathbf{u}) = f(\mathbf{w}) + g(\mathbf{z}) + \frac{\rho}{2}\|\mathbf{r}+\mathbf{u}\|_2^2 - \frac{\rho}{2}\|\mathbf{u}\|_2^2. \qquad (16)$

Subsequently, the ADMM-STAP algorithm can be written in the convenient scaled form

$\mathbf{w}^{(k+1)} = \arg\min_{\mathbf{w}}\left\{ f(\mathbf{w}) + \frac{\rho}{2}\|\mathbf{w} - \mathbf{z}^{(k)} + \mathbf{u}^{(k)}\|_2^2 \right\},$
$\mathbf{z}^{(k+1)} = \arg\min_{\mathbf{z}}\left\{ g(\mathbf{z}) + \frac{\rho}{2}\|\mathbf{w}^{(k+1)} - \mathbf{z} + \mathbf{u}^{(k)}\|_2^2 \right\},$
$\mathbf{u}^{(k+1)} = \mathbf{u}^{(k)} + \mathbf{w}^{(k+1)} - \mathbf{z}^{(k+1)}, \qquad (17)$

where $\mathbf{r}^{(k)} = \mathbf{w}^{(k)} - \mathbf{z}^{(k)}$ is the residual at the $k$'th iteration and $\mathbf{u}^{(k)} = \mathbf{u}^{(0)} + \sum_{j=1}^{k}\mathbf{r}^{(j)}$ is the running sum of the residuals. In the first line of Eq. (17), the objective is to minimize a strictly convex quadratic function, and the solution can be easily obtained as

$\mathbf{w}^{(k+1)} = \left(\mathbf{R}_B + \frac{\rho}{2}\mathbf{I}\right)^{-1}\left[\mathbf{p} + \frac{\rho}{2}\left(\mathbf{z}^{(k)} - \mathbf{u}^{(k)}\right)\right]. \qquad (18)$

As mentioned, $\mathbf{R}_B$ and $\mathbf{p}$ are unknown in practice, and they can be estimated as $\hat{\mathbf{R}}_B = \frac{1}{L}\sum_{l=1}^{L}\mathbf{x}_{B,l}\mathbf{x}_{B,l}^H$ and $\hat{\mathbf{p}} = \frac{1}{L}\sum_{l=1}^{L}\mathbf{x}_{B,l}\,d_l^*$, where $L$ denotes the number of snapshots that are used. Moreover, $\mathbf{x}_{B,l} = \mathbf{B}\mathbf{x}_l$ and $d_l = \mathbf{w}_q^H\mathbf{x}_l$, where $\mathbf{x}_l$ denotes the $l$'th space-time training snapshot.13–15

The solution of Eq. (18) can be obtained directly, i.e., noniteratively. However, it is impractical because the inversion of $\hat{\mathbf{R}}_B + \frac{\rho}{2}\mathbf{I}$ has a high computational complexity of $O[(NM-1)^3]$. Note that, according to Fig. 3, the clutter covariance matrix constructed from the training snapshots associated with the current detecting snapshot can be written as

$\hat{\mathbf{R}}_B^{\mathrm{new}} = \hat{\mathbf{R}}_B^{\mathrm{old}} + \frac{1}{L}\left(\mathbf{x}_{B,\mathrm{new}}\mathbf{x}_{B,\mathrm{new}}^H - \mathbf{x}_{B,\mathrm{old}}\mathbf{x}_{B,\mathrm{old}}^H\right),$

where $\hat{\mathbf{R}}_B^{\mathrm{old}}$ is constructed from the training snapshots associated with the previous detecting snapshot. Denote $\mathbf{\Phi} = \hat{\mathbf{R}}_B + \frac{\rho}{2}\mathbf{I}$; then, according to the matrix inversion lemma,24 each rank-one modification can be handled as

$\left(\mathbf{\Phi} \pm \tfrac{1}{L}\mathbf{x}_B\mathbf{x}_B^H\right)^{-1} = \mathbf{\Phi}^{-1} \mp \frac{\tfrac{1}{L}\,\mathbf{\Phi}^{-1}\mathbf{x}_B\mathbf{x}_B^H\mathbf{\Phi}^{-1}}{1 \pm \tfrac{1}{L}\,\mathbf{x}_B^H\mathbf{\Phi}^{-1}\mathbf{x}_B}.$

It is clear that this update involves only matrix-vector products. Hence, the computational complexity can be reduced to $O[(NM-1)^2]$. A full analysis of the computational complexity is presented in Table 1.

Table 1: Computational complexity.
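As a complement to Eq. (18) and the matrix-inversion-lemma update discussed above, the following Python/NumPy sketch shows the sample estimation of R_B and p, the closed-form w-update, and a Sherman-Morrison rank-one update of the inverse of Phi = R_B + (rho/2)I. The 1/L scaling of the rank-one terms and the sliding-window bookkeeping shown in the comments are illustrative assumptions, not the paper's exact procedure.

import numpy as np

def estimate_statistics(X_B, d):
    """Sample estimates R_B = (1/L) sum x_B x_B^H and p = (1/L) sum x_B d^*.
    X_B has shape (K, L); d has shape (L,)."""
    L = X_B.shape[1]
    R_B = (X_B @ X_B.conj().T) / L
    p = (X_B @ d.conj()) / L
    return R_B, p

def w_update(Phi_inv, p, z, u, rho):
    """w^{k+1} = (R_B + (rho/2) I)^{-1} [p + (rho/2)(z - u)], with Phi_inv precomputed."""
    return Phi_inv @ (p + 0.5 * rho * (z - u))

def rank_one_inverse_update(Phi_inv, x, c):
    """Sherman-Morrison: inv(Phi + c x x^H) from inv(Phi) in O(K^2) operations
    (Phi is Hermitian here, so inv(Phi) x x^H inv(Phi) = (Phi_inv x)(Phi_inv x)^H)."""
    Phi_x = Phi_inv @ x
    return Phi_inv - c * np.outer(Phi_x, Phi_x.conj()) / (1.0 + c * (x.conj() @ Phi_x))

# Sliding the training window by one range cell (assumed bookkeeping):
# add the newly admitted snapshot, then remove the oldest one.
# Phi_inv = rank_one_inverse_update(Phi_inv, x_B_new, +1.0 / L)
# Phi_inv = rank_one_inverse_update(Phi_inv, x_B_old, -1.0 / L)

Precomputing and recursively updating Phi_inv in this way avoids the cubic-cost matrix inversion at every range cell, which is the source of the complexity reduction claimed above.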
In the second line of Eq. (17), the $\mathbf{z}$-update can be represented as

$\mathbf{z}^{(k+1)} = \arg\min_{\mathbf{z}}\left\{ \gamma\|\mathbf{z}\|_1 + \frac{\rho}{2}\|\mathbf{w}^{(k+1)} - \mathbf{z} + \mathbf{u}^{(k)}\|_2^2 \right\}.$

Although the absolute value function is not differentiable, a simple closed-form solution can easily be obtained. Explicitly, the solution is

$\mathbf{z}^{(k+1)} = S_{\gamma/\rho}\left(\mathbf{w}^{(k+1)} + \mathbf{u}^{(k)}\right),$

where $S_\kappa(a) = \mathrm{sgn}(a)\max(|a|-\kappa, 0)$ is the soft-thresholding operator, applied component-wise. The soft-thresholding operator is essentially a shrinkage operator, which moves a point toward zero.

In the ADMM-STAP algorithm, $\mathbf{w}$ and $\mathbf{z}$ are updated alternately, which accounts for the term "alternating direction." A reasonable stopping criterion is that the primal residual $\mathbf{r}^{(k)} = \mathbf{w}^{(k)} - \mathbf{z}^{(k)}$ and the dual residual $\mathbf{s}^{(k)} = \rho(\mathbf{z}^{(k)} - \mathbf{z}^{(k-1)})$ must be small, i.e., $\|\mathbf{r}^{(k)}\|_2 \le \epsilon^{\mathrm{pri}}$ and $\|\mathbf{s}^{(k)}\|_2 \le \epsilon^{\mathrm{dual}}$, where $\epsilon^{\mathrm{pri}}$ and $\epsilon^{\mathrm{dual}}$ are thresholds chosen by absolute and relative criteria

$\epsilon^{\mathrm{pri}} = \sqrt{NM-1}\,\epsilon^{\mathrm{abs}} + \epsilon^{\mathrm{rel}}\max\{\|\mathbf{w}^{(k)}\|_2, \|\mathbf{z}^{(k)}\|_2\}, \qquad \epsilon^{\mathrm{dual}} = \sqrt{NM-1}\,\epsilon^{\mathrm{abs}} + \epsilon^{\mathrm{rel}}\|\rho\,\mathbf{u}^{(k)}\|_2.$

A reasonable value for $\epsilon^{\mathrm{rel}}$ is $10^{-3}$, and the choice of $\epsilon^{\mathrm{abs}}$ depends on the scale of the typical variable values. The detailed iterative procedure of ADMM-STAP is shown in Fig. 4; a compact reference sketch of the complete iteration is also given at the end of the simulation setup below.

3.3 Analysis of Convergence

A proof of the convergence result is presented in this section. We begin by presenting the following theorem.

Theorem 1 (Eckstein–Bertsekas):25 Consider the problem

$\min_{\mathbf{x}} \; f_1(\mathbf{x}) + f_2(\mathbf{G}\mathbf{x}), \qquad (25)$

in the case where the functions $f_1$ and $f_2$ are closed, proper, and convex and $\mathbf{G}$ has a full column rank. Let $\{\eta_k \ge 0\}$ and $\{\nu_k \ge 0\}$ be two sequences such that

$\sum_{k=0}^{\infty}\eta_k < \infty \quad \text{and} \quad \sum_{k=0}^{\infty}\nu_k < \infty. \qquad (26)$

Assume that there are three sequences $\{\mathbf{x}^{(k)}\}$, $\{\mathbf{z}^{(k)}\}$, and $\{\boldsymbol{\lambda}^{(k)}\}$ that satisfy

$\left\|\mathbf{x}^{(k+1)} - \arg\min_{\mathbf{x}}\left\{f_1(\mathbf{x}) + \frac{\rho}{2}\|\mathbf{G}\mathbf{x} - \mathbf{z}^{(k)} - \boldsymbol{\lambda}^{(k)}\|_2^2\right\}\right\| \le \eta_k,$
$\left\|\mathbf{z}^{(k+1)} - \arg\min_{\mathbf{z}}\left\{f_2(\mathbf{z}) + \frac{\rho}{2}\|\mathbf{G}\mathbf{x}^{(k+1)} - \mathbf{z} - \boldsymbol{\lambda}^{(k)}\|_2^2\right\}\right\| \le \nu_k,$
$\boldsymbol{\lambda}^{(k+1)} = \boldsymbol{\lambda}^{(k)} - \left(\mathbf{G}\mathbf{x}^{(k+1)} - \mathbf{z}^{(k+1)}\right). \qquad (27)$

Then, if Eq. (25) has an optimal solution $\mathbf{x}^{\star}$, the sequence $\{\mathbf{x}^{(k)}\}$ converges to this solution, i.e., $\mathbf{x}^{(k)} \to \mathbf{x}^{\star}$.

First, since Eq. (14) is a particular instance of Eq. (25) with $\mathbf{G} = \mathbf{I}$, the full column rank condition in Theorem 1 is satisfied. Second, it is clear that $f(\mathbf{w})$ and $g(\mathbf{z})$ in Eq. (14) are closed, proper, and convex. Moreover, the sequences $\{\mathbf{w}^{(k)}\}$, $\{\mathbf{z}^{(k)}\}$, and $\{\mathbf{u}^{(k)}\}$ generated by Eq. (17) satisfy the conditions of Eq. (27) in a strict sense ($\eta_k = \nu_k = 0$). Hence, the convergence is guaranteed.

3.4 Analysis of Computational Complexity

A comparison of the computational complexities of four STAP algorithms, namely, the conventional sample matrix inversion (SMI) STAP,2 $\ell_1$-regularized RLS-STAP,14 $\ell_1$-regularized online coordinate descent (OCD) STAP,26 and the proposed ADMM-STAP algorithm, is presented in Table 1. The computational complexity is measured by the number of complex multiplications and additions. As shown in Table 1, the ADMM-STAP algorithm has a computational complexity of $O[N_{\mathrm{it}}(NM-1)^2]$, where $N_{\mathrm{it}}$ is the number of iterations. According to the simulation in Sec. 4, the algorithm can converge to an acceptable solution within a few tens of iterations, i.e., $N_{\mathrm{it}}$ is typically much smaller than $NM-1$ and $L$. Hence, the ADMM-STAP algorithm has the lowest level of computational complexity.

4. Simulation Results

The simulation parameters for the ground moving target indication application are listed in Table 2: a radar system equipped with a side-looking ULA is employed, and the elements are spaced half a wavelength apart, i.e., $d = \lambda/2$. Additive noise is modeled as spatially and temporally independent complex Gaussian noise with zero mean and unit variance. All the results are obtained from the average of 100 independent Monte Carlo simulations.

Table 2: Simulation parameters for airborne radar.
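Before turning to the parameter study, the complete $\ell_1$-regularized ADMM-STAP iteration of Sec. 3 (summarized in Fig. 4) can be sketched as follows. This is a minimal Python/NumPy illustration consistent with the w-update of Eq. (18), the soft-thresholding z-update, the scaled dual update, and the stopping thresholds above; the zero initialization and the default values of rho, max_iter, eps_abs, and eps_rel are illustrative assumptions rather than the paper's exact settings.

import numpy as np

def soft_threshold(a, kappa):
    """Component-wise soft-thresholding S_kappa(a) = sgn(a) * max(|a| - kappa, 0),
    with the complex sign sgn(a) = a / |a|."""
    return np.exp(1j * np.angle(a)) * np.maximum(np.abs(a) - kappa, 0.0)

def admm_stap(R_B, p, gamma, rho=1.0, max_iter=100, eps_abs=1e-4, eps_rel=1e-3):
    K = R_B.shape[0]
    Phi_inv = np.linalg.inv(R_B + 0.5 * rho * np.eye(K))   # reused in every iteration
    z = np.zeros(K, dtype=complex)
    u = np.zeros(K, dtype=complex)
    for _ in range(max_iter):
        w = Phi_inv @ (p + 0.5 * rho * (z - u))             # w-update, Eq. (18)
        z_old = z
        z = soft_threshold(w + u, gamma / rho)              # z-update (soft thresholding)
        u = u + w - z                                       # scaled dual update
        r_norm = np.linalg.norm(w - z)                      # primal residual
        s_norm = np.linalg.norm(rho * (z - z_old))          # dual residual
        eps_pri = np.sqrt(K) * eps_abs + eps_rel * max(np.linalg.norm(w), np.linalg.norm(z))
        eps_dual = np.sqrt(K) * eps_abs + eps_rel * np.linalg.norm(rho * u)
        if r_norm <= eps_pri and s_norm <= eps_dual:
            break
    return z                                                # sparse filter weight vector

In use, the returned sparse vector replaces the adaptive weight in the GSC structure of Fig. 2, so the filter output for a snapshot x under test would be y = w_q^H x - z^H (B x).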
4.1 Setting of the Regularization Parameter

The regularization parameter $\gamma$ provides a tradeoff between the steady-state SCNR performance and the convergence speed. Although it is clear that the value of $\gamma$ should be proportional to the noise power and inversely proportional to the rank of the clutter covariance matrix, it is still difficult to determine the optimal value. Adjusting the regularization parameter adaptively is an interesting research area (e.g., Refs. 13 and 14); however, it is not the main focus of this paper. Here, the regularization parameter is selected from a fixed set of candidate values. The output SCNR versus the number of snapshots used, for different values of the regularization parameter, is shown in Fig. 5. In this simulation, we assume that the signal of the moving target impinges the array from a DOA of 90 deg and that the radial velocity of the moving target corresponds to a Doppler frequency of nearly 231 Hz. The results in Fig. 5 indicate that (i) the value of $\gamma$ is crucial to the output SCNR performance, and there is a reasonable range of values that improves the convergence speed and the steady-state output SCNR simultaneously; (ii) the output SCNR is degraded when $\gamma$ is too large, since the filter weight vector is shrunk toward zero; and (iii) the output SCNR performance is not considerably improved when $\gamma$ is too small, in which case it is nearly the same as that of the conventional STAP algorithm.

The output SCNR performance versus the Doppler frequency of the moving target at a DOA of 90 deg is shown in Fig. 6. The range of potential Doppler frequencies is from −500 to 500 Hz, and 60 snapshots are used to optimize the filter vector. The same conclusion can be drawn: the ADMM-STAP algorithm with a properly chosen $\gamma$ provides a satisfactory output SCNR performance. The number of iterations for different values of $\gamma$ is shown in Fig. 7. As shown, if we choose $\gamma$ from an appropriate range, then the ADMM-STAP algorithm converges within a few tens of iterations, which is acceptable in practice. Otherwise, the number of iterations increases significantly, and the iteration output cannot converge to the optimal solution, leading to a performance degradation to a certain extent.

4.2 Comparison with Other Algorithms

In this section, we compare the output SCNR performance of the proposed algorithm with that of the IPM-STAP, OCD-STAP, and RLS-STAP algorithms. The regularization parameter is set to 1 for all the algorithms, and the other parameters are the same as in the previous simulations. The output SCNR performances versus the number of used snapshots and the target Doppler frequency are compared in Figs. 8 and 9. From these figures, we can see that (i) the output SCNR performance of the IPM-STAP algorithm is superior to that of the RLS-STAP and OCD-STAP algorithms, but this is achieved at a high computational cost, and (ii) the output SCNR performance of the ADMM-STAP algorithm outperforms that of the IPM-STAP algorithm, which supports our previous observation that optimizing the parameter estimation problem to a very high accuracy generally yields little improvement.

5. ℓ1-Regularized STAP with Mountaintop Data

The performance of the $\ell_1$-regularized STAP approaches is verified here using the Mountaintop data set (data No. t38pre01v1) acquired with the experimental radar system RSTER (radar surveillance technology experimental radar) sponsored by the Advanced Research Projects Agency.
The Mountaintop program is devoted to supporting the mission requirements of next-generation airborne early warning platforms and to supporting the evaluation of STAP algorithms. The antenna of the system is a 5-m-wide by 10-m-high horizontally polarized array composed of 14 column elements. The CPI pulse number is 16, the antenna array spacing is 0.333 m, the PRF is 625 Hz, the carrier frequency is 435 MHz, and the bandwidth is 500 kHz. The transmit beam is steered to illuminate a mountain range (a large clutter scatterer). The data set is divided into two subsets in our experiment. The first subset, comprising 100 snapshots, is used to train the STAP filters. The second subset, also comprising 100 snapshots, is used to test the performance. Two simulated moving targets are added to the test subset. The first target has a Doppler frequency of 62.5 Hz; the second target impinges the array from a DOA of 20 deg with a Doppler frequency of 187.5 Hz. Hence, the first target can essentially be regarded as a ground moving vehicle in the mountains, and the second target can be regarded as an aircraft near the mountain. The minimum variance distortionless response (MVDR) spectra of the two subsets are shown in Fig. 10. The improvement factor (IF) performance, defined as the ratio of the output SCNR to the input SCNR, is investigated in Fig. 11. The regularization parameter is set to 1 for all the algorithms. As shown, the IF performance of the proposed ADMM-STAP approach substantially outperforms that of the other approaches. Hence, the effectiveness of the proposed approach is confirmed using data from the experimental multichannel radar system RSTER.

6. Conclusions

In this paper, we proposed a sparsity-based STAP approach built on an $\ell_1$-regularized constraint to accelerate the convergence of STAP. The optimization problem with the additional $\ell_1$-regularized constraint was solved using the ADMM, and the detailed iterative procedure of ADMM-STAP was derived. Through the examples, it was demonstrated that the proposed method can effectively decrease the required number of secondary snapshots and provide better performance than the $\ell_1$-regularized OCD-STAP and $\ell_1$-regularized RLS-STAP methods.

Acknowledgments

The authors thank the National Natural Science Foundation of China (Grant No. 61101178) and the China Scholarship Council for their support.

References

1. H. Wang et al., "Robust waveform design for MIMO-STAP to improve the worst-case detection performance," EURASIP J. Adv. Signal Process. 2013(1), 1–8 (2013). http://dx.doi.org/10.1186/1687-6180-2013-52
2. W. L. Melvin, "A STAP overview," IEEE Aerosp. Electron. Syst. Mag. 19(1), 19–35 (2004). http://dx.doi.org/10.1109/MAES.2004.1263229
3. X. Guo et al., "Modified reconstruction algorithm based on space-time adaptive processing for multichannel synthetic aperture radar systems in azimuth," J. Appl. Remote Sens. 10(3), 035022 (2016). http://dx.doi.org/10.1117/1.JRS.10.035022
4. R. Fa and R. C. de Lamare, "Reduced-rank STAP algorithms using joint iterative optimization of filters," IEEE Trans. Aerosp. Electron. Syst. 47(3), 1668–1684 (2011). http://dx.doi.org/10.1109/TAES.2011.5937257
5. R. Fa, R. C. de Lamare and L. Wang, "Reduced-rank STAP schemes for airborne radar based on switched joint interpolation, decimation and filtering algorithm," IEEE Trans. Signal Process. 58(8), 4182–4194 (2010). http://dx.doi.org/10.1109/TSP.2010.2048212
6. R. Li et al., "Reduced-dimension space-time adaptive processing based on angle-Doppler correlation coefficient," EURASIP J. Adv. Signal Process. 2016(97), 1–9 (2016). http://dx.doi.org/10.1186/s13634-016-0395-2
7. W. Zhang et al., "Multiple-input multiple-output radar multistage multiple-beam beamspace reduced-dimension space-time adaptive processing," IET Radar Sonar Navig. 7(3), 295–303 (2013). http://dx.doi.org/10.1049/iet-rsn.2012.0078
8. W. Zhang et al., "A method for finding best channels in beam-space post-Doppler reduced-dimension STAP," IEEE Trans. Aerosp. Electron. Syst. 50(1), 254–264 (2014). http://dx.doi.org/10.1109/TAES.2013.120145
9. K. Sun, H. Meng and F. Lapierre, "Registration-based compensation using sparse representation in conformal-array STAP," Signal Process. 91(10), 2268–2276 (2011). http://dx.doi.org/10.1016/j.sigpro.2011.04.008
10. K. Sun, H. Meng and Y. Wang, "Direct data domain STAP using sparse representation of clutter spectrum," Signal Process. 91(9), 2222–2236 (2011). http://dx.doi.org/10.1016/j.sigpro.2011.04.006
11. S. Sen, "OFDM radar space-time adaptive processing by exploiting spatio-temporal sparsity," IEEE Trans. Signal Process. 61(1), 118–130 (2013). http://dx.doi.org/10.1109/TSP.2012.2222387
12. Z. Yang, X. Li and H. Wang, "Space-time adaptive processing based on weighted regularized sparse recovery," Prog. Electromagn. Res. B 42, 245–262 (2012). http://dx.doi.org/10.2528/PIERB12051804
13. Z. Yang, R. C. de Lamare and X. Li, "L1-regularized STAP algorithm with a generalized side-lobe canceler architecture for airborne radar," IEEE Trans. Signal Process. 60(2), 674–686 (2012). http://dx.doi.org/10.1109/TSP.2011.2172435
14. Z. Gao et al., "L1-regularised joint iterative optimisation space-time adaptive processing algorithm," IET Radar Sonar Navig. 10(3), 435–441 (2016). http://dx.doi.org/10.1049/iet-rsn.2015.0044
15. Z. Yang et al., "Sparsity-based space-time adaptive processing using complex-valued homotopy technique for airborne radar," IET Signal Process. 8(5), 552–564 (2014). http://dx.doi.org/10.1049/iet-spr.2013.0069
16. M. Shen et al., "An efficient moving target detection algorithm based on sparsity-aware spectrum estimation," Sensors 14(9), 17055–17067 (2014). http://dx.doi.org/10.3390/s140917055
17. J. Qin, I. Yanovsky and W. Yin, "Efficient simultaneous image deconvolution and upsampling algorithm for low-resolution microwave sounder data," J. Appl. Remote Sens. 9(1), 095035 (2015). http://dx.doi.org/10.1117/1.JRS.9.095035
18. H. Zhai et al., "Reweighted mass center based object-oriented sparse subspace clustering for hyperspectral images," J. Appl. Remote Sens. 10(4), 046014 (2016). http://dx.doi.org/10.1117/1.JRS.10.046014
19. S. Boyd et al., "Distributed optimization and statistical learning via the alternating direction method of multipliers," Found. Trends Mach. Learn. 3(1), 1–122 (2010). http://dx.doi.org/10.1561/2200000016
20. M. Afonso, J. Bioucas-Dias and M. Figueiredo, "Fast image recovery using variable splitting and constrained optimization," IEEE Trans. Image Process. 19(9), 2345–2356 (2010). http://dx.doi.org/10.1109/TIP.2010.2047910
21. Z. Yang et al., "On clutter sparsity analysis in space-time adaptive processing airborne radar," IEEE Geosci. Remote Sens. Lett. 10(5), 1214–1218 (2013). http://dx.doi.org/10.1109/LGRS.2012.2236639
22. G. M. Herbert, "Clutter modeling for space-time adaptive processing in airborne radar," IET Radar Sonar Navig. 4(2), 178–186 (2010). http://dx.doi.org/10.1049/iet-rsn.2009.0064
23. H. Sun et al., "Estimation of the ocean clutter rank for HF/VHF radar space-time adaptive processing," IET Radar Sonar Navig. 4(6), 755–763 (2010). http://dx.doi.org/10.1049/iet-rsn.2009.0252
24. B. Chen et al., "Quantized kernel recursive least squares algorithm," IEEE Trans. Neural Netw. Learn. Syst. 24(9), 1484–1491 (2013). http://dx.doi.org/10.1109/TNNLS.2013.2258936
25. J. Eckstein and D. Bertsekas, "On the Douglas–Rachford splitting method and the proximal point algorithm for maximal monotone operators," Math. Program. 55(1), 293–318 (1992). http://dx.doi.org/10.1007/BF01581204
26. D. Angelosante, J. A. Bazerque and G. B. Giannakis, "Online adaptive estimation of sparse signals: where RLS meets the L1-norm," IEEE Trans. Signal Process. 58(7), 3436–3447 (2010). http://dx.doi.org/10.1109/TSP.2010.2046897
Biography

Lilong Qin is working toward his PhD at the National University of Defense Technology, Changsha, China, and is working with Aalto University, Espoo, Finland. He received his BS degree in information engineering and his MS degree in circuits and systems from the Electronic Engineering Institute, Hefei, China, in 2010 and 2013, respectively. His current research interests include synthetic aperture radar and adaptive beamforming.

Manqing Wu received his MS degree from the National University of Defense Technology, Changsha, China, in 1990. Currently, he is a professor with the China Electronics Technology Group Corporation, Beijing, China, and is a member of the Chinese Academy of Engineering. His research field is radar signal processing.