Journal of the China Society for Industrial and Applied Mathematics
Supervised by: Ministry of Education of the People's Republic of China
Sponsored by: Xi'an Jiaotong University
ISSN 1005-3085  CN 61-1269/O1

Chinese Journal of Engineering Mathematics ›› 2017, Vol. 34 ›› Issue (2): 209-220. doi: 10.3969/j.issn.1005-3085.2017.02.009



The Generalization Ability of Online Algorithm for $\alpha$-mixing Sequence

HU Xiao-yun1,   ZOU Bin1,   GONG Tie-liang2,   YANG Yan3   

  1- Faculty of Mathematics and Statistics, Hubei University, Wuhan 430062
    2- School of Mathematics and Statistics, Xi'an Jiaotong University, Xi'an 710049 
    3- Faculty of Computer Science, Wuhan Qingchuan University, Wuhan 430204
  • Received: 2015-12-07  Accepted: 2016-10-18  Online: 2017-04-15  Published: 2017-06-15
  • Contact: Y. Yang. E-mail address: 41743638@qq.com
  • Supported by:
    The National Natural Science Foundation of China (61370002; 61403132); the Natural Science Foundation of Hubei Province (2015CFB404).


Abstract:

Theoretical investigation of online algorithms has received considerable attention in recent years. Previous generalization bounds for online learning algorithms were usually established on the basis of independent and identically distributed (i.i.d.) samples. In this paper, we go beyond this classical framework and investigate the generalization ability of an online algorithm with an $\alpha$-mixing sequence. We define the $\alpha$-mixing sequence in terms of the total variation, and our analysis requires only martingale convergence arguments. Finally, the ``regret'' is utilized to measure the performance of online algorithms. Our main result is a generalization error estimate that is tighter than the corresponding one for $\beta$-mixing sequences.
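The notion of "regret" used above can be made concrete with a toy sketch. This is our own illustration, not the paper's algorithm: it runs online gradient descent on a squared loss over a weakly dependent (AR(1)) sequence standing in for a mixing process, and measures regret against the best fixed predictor in hindsight. All parameter choices (`eta`, `T`, the data generator) are assumptions made for the example.

```python
import numpy as np

# Illustration only: a simple AR(1) sequence stands in for a dependent
# (mixing-type) data stream; the paper's alpha-mixing setting is not reproduced.
rng = np.random.default_rng(0)
T = 200
y = np.empty(T)
y[0] = rng.normal()
for t in range(1, T):
    y[t] = 0.5 * y[t - 1] + rng.normal(scale=0.1)

# Online gradient descent on the squared loss ell(w, y_t) = (w - y_t)^2.
w, eta = 0.0, 0.05
online_loss = 0.0
for t in range(T):
    online_loss += (w - y[t]) ** 2      # pay the loss on the current round
    w -= eta * 2 * (w - y[t])           # gradient step after seeing y_t

# The best fixed predictor in hindsight for squared loss is the sample mean.
best_fixed_loss = ((y.mean() - y) ** 2).sum()

# Regret: cumulative online loss minus the loss of the best fixed predictor.
regret = online_loss - best_fixed_loss
print(f"cumulative regret after T={T} rounds: {regret:.3f}")
```

A small (slowly growing) regret indicates that the online algorithm's cumulative loss approaches that of the best fixed hypothesis, which is exactly the performance measure the abstract refers to.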

Key words: online algorithms, i.i.d. samples, $\alpha$-mixing sequence, generalization bound
