Advanced Engineering Sciences: 2014, 46(2): 72-78
Local Feature Space Time Coding for Human Action Recognition
(College of Information System and Management, National University of Defense Technology)
Received: 2013-05-21    Revised: 2013-10-13
Abstract: To overcome the limitation of the Bag of Features (BoF) model, which ignores the space-time relationships among local features in human action recognition, a space-time coding (STC) method for local features was proposed. The space-time locations of local features are introduced into the feature coding phase so that their space-time relationships are modeled directly. First, the local features are projected into a sub space-time volume (sub-STV) to obtain their space-time coordinates. Second, their appearance information and space-time locations are encoded simultaneously. Finally, the statistics produced by feature pooling over these codes within the sub-STV are used for action classification. To further improve performance, multi-scale STC and locality-constrained STC are also proposed, and a locality-constrained block sparse representation classifier (LBSRC) is adopted in the classification stage. Experimental results on the KTH, Weizmann, and UCF Sports benchmark datasets show that the proposed methods effectively represent the space-time relationships among local features and improve action recognition accuracy.
Keywords: pattern recognition; action recognition; bag of features; sparse representation
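To make the coding step concrete, the following is a minimal sketch of the idea of encoding appearance and space-time location jointly within one sub-STV. It is an illustration under stated assumptions, not the paper's exact formulation: it assumes each appearance codeword carries a space-time anchor, combines the appearance distance with a weighted space-time distance, soft-assigns each local feature, and max-pools the resulting codes. All names (space_time_code, anchors, beta) are hypothetical.

    import numpy as np

    def space_time_code(descriptors, positions, codebook, anchors, beta=0.5):
        """Jointly encode appearance and space-time location (illustrative sketch).

        descriptors: (N, D) appearance descriptors of local features in one sub-STV
        positions:   (N, 3) (x, y, t) coordinates normalized to [0, 1] in the sub-STV
        codebook:    (K, D) appearance codewords (e.g., learned by k-means)
        anchors:     (K, 3) space-time anchor per codeword (an assumption here)
        beta:        weight of the space-time term against the appearance term
        """
        # Combine appearance and space-time distances, so features that look
        # alike but occur at different places/times activate different codes --
        # exactly the relationship that plain BoF discards.
        app = ((descriptors[:, None, :] - codebook[None, :, :]) ** 2).sum(-1)  # (N, K)
        stv = ((positions[:, None, :] - anchors[None, :, :]) ** 2).sum(-1)     # (N, K)
        dist = app + beta * stv
        # Numerically stabilized soft assignment (subtract row minimum).
        codes = np.exp(-(dist - dist.min(axis=1, keepdims=True)))
        codes /= codes.sum(axis=1, keepdims=True)
        # Max pooling over the sub-STV yields one (K,) vector for the classifier.
        return codes.max(axis=0)

    # Toy usage: 50 descriptors of dimension 64, codebook of 128 words.
    rng = np.random.default_rng(0)
    pooled = space_time_code(rng.normal(size=(50, 64)), rng.random((50, 3)),
                             rng.normal(size=(128, 64)), rng.random((128, 3)))
    print(pooled.shape)  # (128,)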
Article ID: 201300495
Funding: Supported by the National Natural Science Foundation of China (61175006, 61275016)
Citation:
Wang Bin, Liu Yu, Wang Wei, Xu Wei, Zhang Maojun. Local Feature Space Time Coding for Human Action Recognition[J]. Advanced Engineering Sciences, 2014, 46(2): 72-78.