| Author: | 许罗迈 (Xu Luomai) |
| Publisher: | 科学出版社 (Science Press) |
| Series: | |
| Tags: | Fundamental theory of automation |
Preface
Acknowledgements
Chapter One Prologue
Chapter Two MT state of the art
2.1 MT as symbolic systems
2.2 Practical MT
2.3 An alternative MT technique
2.3.1 Theoretical foundation
2.3.2 Translation model
2.3.3 Language model
2.4 Discussion
Chapter Three Connectionist solutions
3.1 NLP models
3.2 Representation
3.3 Phonological processing
3.4 Learning verb past tense
3.5 Part-of-speech tagging
3.6 Chinese collocation learning
3.7 Syntactic parsing
3.7.1 Learning active/passive transformation
3.7.2 Confluent preorder parsing
3.7.3 Parsing with flat structures
3.7.4 Parsing embedded clauses
3.7.5 Parsing with deeper structures
3.8 Discourse analysis
3.8.1 Story gestalt and text understanding
3.8.2 Processing stories with script knowledge
3.9 Machine translation
3.10 Conclusion
Chapter Four NeuroTrans design considerations
4.1 Scalability and extensibility
4.2 Transfer or interlingual
4.3 Hybrid or fully connectionist
4.4 The use of linguistic knowledge
4.5 Translation as a two-stage process
4.6 Selection of network models
4.7 Connectionist implementation
4.8 Connectionist representation issues
4.9 Conclusion
Chapter Five A neural lexicon model
5.1 Language data
5.2 Knowledge representation
5.2.1 Symbolic approach
5.2.2 Statistical approach
5.2.3 Connectionist approach
5.2.4 NeuroTrans' input/output representation
5.2.5 NeuroTrans' lexicon representation
5.3 Implementing the neural lexicon
5.3.1 Words in context
5.3.2 Context with weights
5.3.3 Details of algorithm
5.3.4 The Neural Lexicon Builder
5.4 Training
5.4.1 Sample preparation
5.4.2 Training results
5.4.3 Generalization test
5.5 Discussion
5.5.1 Adequacy
5.5.2 Scalability and extensibility
5.5.3 Efficiency
5.5.4 Weaknesses
Chapter Six Implementing the language model
6.1 Overview
6.2 Design
6.2.1 Redefining the generation problem
6.2.2 Defining jumble activity
6.2.3 Language model structure
6.3 Implementation
6.3.1 Network structure, sampling, training, and results
6.3.2 Generalization test
6.4 Discussion
6.4.1 Insufficient data
6.4.2 Information richness
6.4.3 Insufficient contextual information
6.4.4 Distributed language model
Chapter Seven Conclusion
Chapter Eight References
Index