Event-Triggered Proximal Online Gradient Descent Algorithm for Parameter Estimation (2024)

research-article

Authors: Yaoyao Zhou, Gang Chen

IEEE Transactions on Signal Processing, Volume 72

Pages 2594–2606

Published: 13 May 2024


Abstract

This paper investigates the constrained composite convex parameter estimation problem over a networked system, where the composite objective is the sum of node-specific smooth loss functions and a nonsmooth regularizer. To reduce the communication burden, an event-triggered mechanism is introduced and a novel event-triggered proximal online gradient descent algorithm (EPOGDA) is proposed. The analysis shows that the dynamic regret of EPOGDA is sublinear provided that the event-triggered threshold converges to zero as time tends to infinity and the cumulative variation between consecutive optimal solutions grows sublinearly. Further, EPOGDA is extended to gradient-free scenarios in which gradients are estimated with a Gaussian smoothed gradient estimator (GSGE); the resulting GSGE-EPOGDA is presented and analyzed, and it incurs no performance degradation compared with EPOGDA. Finally, the advantages of EPOGDA and GSGE-EPOGDA are verified on a distributed multi-sensor network.
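
The paper's pseudocode is not reproduced on this page. As a rough illustration of the three ingredients named in the abstract, the following Python sketch combines a proximal online gradient step (soft thresholding, assuming an l1 regularizer), an event-triggered broadcast test with a threshold that decays over time, and a Gaussian smoothed two-point gradient estimator for the gradient-free variant. All function names, step sizes, the choice of regularizer, and the single-node loop are illustrative assumptions, not the authors' exact EPOGDA or GSGE-EPOGDA updates.

```python
import numpy as np

def soft_threshold(v, tau):
    """Proximal operator of tau * ||x||_1 (soft thresholding)."""
    return np.sign(v) * np.maximum(np.abs(v) - tau, 0.0)

def gsge_gradient(loss, x, mu=1e-3, num_samples=20, rng=None):
    """Gaussian smoothed gradient estimate: two-point finite differences
    along random Gaussian directions, averaged over num_samples draws."""
    rng = np.random.default_rng() if rng is None else rng
    g = np.zeros_like(x)
    for _ in range(num_samples):
        u = rng.standard_normal(x.shape)
        g += (loss(x + mu * u) - loss(x)) / mu * u
    return g / num_samples

def epogda_step(x, last_sent, grad, alpha, lam, threshold):
    """One proximal online gradient step followed by the event-triggered
    communication test: broadcast only if the new estimate has drifted
    from the last transmitted value by more than the current threshold."""
    x_new = soft_threshold(x - alpha * grad, alpha * lam)
    triggered = np.linalg.norm(x_new - last_sent) > threshold
    if triggered:
        last_sent = x_new.copy()  # transmit x_new to the neighbours
    return x_new, last_sent, triggered

# Toy usage: track a slowly drifting target with a decaying trigger threshold.
rng = np.random.default_rng(0)
x, last_sent = np.zeros(5), np.zeros(5)
target = np.array([1.0, -0.5, 0.0, 0.3, 0.0])
for t in range(1, 201):
    target += 0.01 * rng.standard_normal(5)           # time-varying optimum
    loss = lambda z: 0.5 * np.sum((z - target) ** 2)  # node-specific smooth loss
    grad = gsge_gradient(loss, x, rng=rng)            # gradient-free variant
    x, last_sent, sent = epogda_step(x, last_sent, grad, alpha=0.1,
                                     lam=0.01, threshold=1.0 / t)
```

In the distributed setting of the paper, each node would additionally combine the estimates received from its neighbours at the triggering instants; that consensus step is omitted from this sketch for brevity.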




Published In

IEEE Transactions on Signal Processing, Volume 72, 2024, 3394 pages

ISSN: 1053-587X

1053-587X © 2024 IEEE. Personal use is permitted, but republication/redistribution requires IEEE permission. See https://www.ieee.org/publications/rights/index.html for more information.

Publisher

IEEE Press
