Research on an Evolutionary Feature Extraction Algorithm for Feedback Neural Networks


Contents

Chapter 1  Introduction
1.1 Research Background
1.2 Research Status
1.3 Work of This Paper
Chapter 2  Overview of Feature Extraction Techniques
2.1 Linear Feature Extraction
2.1.1 Principal Component Analysis
2.1.2 Linear Discriminant Analysis
2.1.3 RELIEF Algorithm for Feature Extraction
2.2 Nonlinear Feature Extraction
2.2.1 Kernel Methods
2.2.2 Manifold Learning Methods
Chapter 3  Evolutionary Computation (EC)
3.1 Genetic Algorithms
3.1.1 A Worked Example
3.2 Mind Evolutionary Algorithm
3.2.1 Basic Idea of the Mind Evolutionary Algorithm
3.2.2 Characteristics of the Mind Evolutionary Algorithm
Chapter 4  Optimizing the BP Neural Network with the Mind Evolutionary Algorithm
4.1 Introduction to the BP Neural Network
4.1.1 Overview of the BP Neural Network
4.1.2 Feature Extraction with an Evolutionary Algorithm Based on the BP Neural Network
4.2 Implementation of the Mind Evolutionary Algorithm
4.2.1 Generation of the Initial Population
4.2.2 Generation of Subpopulations
4.3 MATLAB Implementation
4.3.1 Creating and Training the BP Neural Network
4.3.2 Analysis of Results
Chapter 5  Conclusion
References

Research on an Evolutionary Feature Extraction Algorithm for Feedback Neural Networks

杨良


Abstract: The contradiction between the comprehensiveness of information and the curse of dimensionality is the primary problem to be resolved for network situational awareness in the era of big data. Feature extraction has long been the mainstream approach to dimensionality reduction and is an important research direction in pattern recognition, since it can improve both the efficiency and the effectiveness of classification. However, existing algorithms do not perform well on high-dimensional nonlinear data. Deep learning, a class of learning algorithms with multiple layers of nonlinear mappings, can approximate complex functions, but it is very sensitive to the parameters of its hidden layers. To address these problems, this paper introduces the idea of evolutionary algorithms into deep learning and proposes a feature extraction algorithm based on mind evolutionary learning. The algorithm exploits the global search and optimization characteristics of genetic algorithms and evolution strategies to optimize the deep learning structure and its parameters. Theoretical analysis and experimental results demonstrate the effectiveness of the algorithm.

We know that the transmission of information in the human brain and its responses to external stimuli are controlled by neurons. The human brain consists of billions of such neurons. These neurons are not isolated but closely interconnected: each neuron is connected, on average, to several thousand others, and together they constitute the neural network of the human brain. Stimuli propagate through this network according to certain rules. A neuron does not react every time it receives stimulation from other neurons; it first accumulates the stimuli arriving from its neighbouring neurons and, at a certain moment, generates its own stimulus and passes it on to some of its adjacent neurons. The billions of neurons working in this way constitute the brain's response to the outside world. The mechanism by which the human brain learns from external stimuli is the adjustment of the connections between these neurons and of their strengths. Of course, the above is a simplified biological model of the real neural activity of the human brain. This simplified model can be carried over to machine learning and described as an artificial neural network, of which the BP neural network is one example.

In this paper, the BP neural network is optimized with the mind evolutionary algorithm to further reduce the data dimensionality. Samples from 'data.mat' are drawn at random for the programming experiments, and the results show that the feature accuracy is effectively improved.

Key words: Feature Extraction; Evolutionary Algorithm; BP Neural Network

Chapter 1  Introduction

1.1 Research Background

With the development of computer science, researchers have combined computer science with biological evolution under the "survival of the fittest" principle, gradually developing a class of heuristic stochastic search algorithms known as evolutionary computation (EC). The main evolutionary algorithms are evolution strategies, genetic algorithms, and evolutionary programming.

Compared with traditional algorithms, evolutionary algorithms mainly rely on population-based search. They have been successfully applied to complex problems in machine learning, image processing [2], optimization [3], and artificial intelligence [4]. However, evolutionary algorithms still suffer from problems such as premature convergence and slow convergence, so the goal of this paper is to optimize the BP neural network with the mind evolutionary algorithm.
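To illustrate the population-based selection, crossover, and mutation cycle of a genetic algorithm, a minimal MATLAB sketch is given below. The objective function f(x) = x^2 on [0, 31], the population size, and the probability parameters are toy settings assumed here for illustration only; this is not the implementation used in the later chapters of this thesis.

% Minimal genetic algorithm sketch: binary encoding, maximizing f(x) = x^2 on [0, 31].
% All parameters are assumed toy values for illustration only.
popSize = 20; nBits = 5; nGen = 50; pc = 0.8; pm = 0.05;
pop = randi([0 1], popSize, nBits);          % random initial population
weights = 2.^(nBits-1:-1:0)';                % binary decoding weights
for g = 1:nGen
    x   = pop * weights;                     % decode chromosomes to integers
    fit = x.^2;                              % fitness = objective value
    % roulette-wheel selection
    cum = cumsum(fit / sum(fit)); cum(end) = 1;
    idx = arrayfun(@(r) find(cum >= r, 1), rand(popSize, 1));
    newPop = pop(idx, :);
    % single-point crossover
    for i = 1:2:popSize-1
        if rand < pc
            cp  = randi(nBits-1);
            tmp = newPop(i, cp+1:end);
            newPop(i,   cp+1:end) = newPop(i+1, cp+1:end);
            newPop(i+1, cp+1:end) = tmp;
        end
    end
    % bit-flip mutation
    mask = rand(popSize, nBits) < pm;
    pop  = double(xor(newPop, mask));
end
x = pop * weights;
[fbest, ibest] = max(x.^2);
fprintf('best individual x = %d, f(x) = %d\n', x(ibest), fbest);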

1.2 Research Status

The arrival of the big data era is both an opportunity and a challenge for situational awareness in large-scale complex networks. On the one hand, all kinds of information can comprehensively reflect the operating state of a network; on the other hand, the large volume of heterogeneous data increases the burden of data processing and greatly limits the effectiveness of network situational awareness. To resolve the contradiction between the comprehensiveness of information and the curse of dimensionality, the dimensionality of the data space must be reduced. Feature extraction reduces the feature dimensionality by combining features rather than deleting them, which can effectively compress the feature space. Commonly used linear feature extraction methods include principal component analysis (PCA), linear discriminant analysis [5] (LDA) based on the Fisher criterion, and multidimensional scaling [6] (MDS). However, samples in high-dimensional data spaces usually have nonlinear structure, and linear methods have difficulty extracting this information completely. At present, limited by the number of computational units, nonlinear learning algorithms with shallow architectures are also limited in their ability to represent complex functions.
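As a concrete illustration of linear feature extraction, the following minimal MATLAB sketch performs PCA via singular value decomposition. The sample matrix is random, assumed data, and the number of retained components k is chosen arbitrarily; the sketch only illustrates the idea of projecting samples onto a low-dimensional subspace and is not the processing applied to this thesis's data.

% PCA sketch: SVD of the centered sample matrix, keep the first k components.
X  = randn(100, 10);                    % assumed sample matrix: 100 samples, 10 features
Xc = X - mean(X, 1);                    % center each feature (zero mean)
[~, S, V] = svd(Xc, 'econ');            % economy-size singular value decomposition
k  = 3;                                 % number of retained components (assumed)
Z  = Xc * V(:, 1:k);                    % project samples onto the k-dimensional subspace
explained = diag(S).^2 / sum(diag(S).^2);
fprintf('first %d components explain %.2f%% of the variance\n', k, 100*sum(explained(1:k)));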

At present, neural network induction requires both parameter learning and structure learning, that is, learning the weight values as well as an appropriate topology of nodes and connections. Existing approaches to this task fall into two broad categories. Constructive algorithms start from a simple network and add nodes and connections as needed, whereas destructive (pruning) methods start from a large network and remove redundant components. Although these algorithms address the problem of topology acquisition, they do so in a very limited way: because they modify the network structure monotonically, constructive and destructive methods restrict the traversal of the space of available architectures; once an architecture has been tried and judged inadequate, a new one is adopted and the old one is no longer available. Moreover, these methods usually employ only a single predefined structural modification, such as "add a fully connected hidden unit", to generate successive topologies. This is a form of structural hill climbing and is prone to getting trapped in structural local minima. In addition, constructive and destructive algorithms make simplifying architectural assumptions to facilitate network induction: for example, Ash allows only feedforward networks, Fahlman assumes a restricted form of recurrence, and Chen et al. explore only fully connected topologies. This creates a situation in which the task is forced into an architecture rather than an architecture being fitted to the task.
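To make the meaning of "constructive" structure search concrete, here is a hedged MATLAB sketch that starts from a small hidden layer and monotonically increases the number of hidden nodes, keeping the structure with the lowest training error. It assumes the Deep Learning Toolbox (feedforwardnet, train) and uses the toolbox demo data simplefit_dataset; the candidate sizes are assumed values, not the experimental settings of this thesis.

% Constructive structure search sketch: monotonically grow the hidden layer.
[X, T] = simplefit_dataset;              % demo regression data shipped with the toolbox
bestErr = inf; bestH = 0;
for h = 1:2:15                            % candidate hidden-layer sizes (assumed range)
    net = feedforwardnet(h);              % single-hidden-layer feedforward network
    net.trainParam.showWindow = false;    % suppress the training GUI
    net = train(net, X, T);
    err = perform(net, T, net(X));        % mean squared error
    if err < bestErr
        bestErr = err; bestH = h;
    end
end
fprintf('selected %d hidden nodes, training error %.4g\n', bestH, bestErr);

This monotone search is exactly the structural hill climbing criticized above: discarded structures are never revisited, so the search can easily get stuck in a structural local minimum.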
