Detail
Article: Learning To Control In Operational Space
By: [s.n]
Type: Article from Journal - international scholarly
In collection: The International Journal of Robotics Research vol. 27 no. 2 (Feb. 2008), pages 197-212.
Topics: operational space control; robot learning; reinforcement learning; reward-weighted regression
Full text: 197.pdf (1.18 MB)
Abstract: One of the most general frameworks for phrasing control problems for complex, redundant robots is operational-space control. However, while this framework is of essential importance for robotics and well understood from an analytical point of view, it can be prohibitively hard to achieve accurate control in the face of modeling errors, which are inevitable in complex robots (e.g. humanoid robots). In this paper, we suggest a learning approach for operational-space control as a direct inverse model learning problem. A first important insight for this paper is that a physically correct solution to the inverse problem with redundant degrees of freedom does exist when learning of the inverse map is performed in a suitable piecewise linear way. The second crucial component of our work is based on the insight that many operational-space controllers can be understood in terms of a constrained optimal control problem. The cost function associated with this optimal control problem allows us to formulate a learning algorithm that automatically synthesizes a globally consistent desired resolution of redundancy while learning the operational-space controller. From the machine learning point of view, this learning problem corresponds to a reinforcement learning problem that maximizes an immediate reward. We employ an expectation-maximization policy search algorithm in order to solve this problem. Evaluations on a three degrees-of-freedom robot arm are used to illustrate the suggested approach. The application to a physically realistic simulator of the anthropomorphic SARCOS Master arm demonstrates feasibility for complex high degree-of-freedom robots. We also show that the proposed method works in the setting of learning resolved motion rate control on a real, physical Mitsubishi PA-10 medical robotics arm.
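The abstract's core machine-learning idea, maximizing an immediate reward with an EM-style policy search, is commonly realized as reward-weighted regression: sample actions from an exploratory policy, weight each sample by its immediate reward, and refit the policy by weighted least squares. The sketch below is a minimal toy illustration of that update, not the paper's operational-space controller; the linear policy, feature map, and reward function are all assumptions chosen for clarity.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical toy problem: learn a linear policy a = theta^T phi(s)
# whose optimal action for state s is a* = 2*s + 1. The immediate
# reward is an exponentiated negative squared error, which lets the
# EM (M-step) update take the form of a weighted linear regression.
def features(s):
    return np.stack([s, np.ones_like(s)], axis=1)  # phi(s) = [s, 1]

def reward(s, a):
    return np.exp(-(a - (2.0 * s + 1.0)) ** 2)

theta = np.zeros(2)  # policy parameters
sigma = 1.0          # exploration noise (std dev)

for _ in range(30):
    s = rng.uniform(-1.0, 1.0, size=200)
    phi = features(s)
    a = phi @ theta + rng.normal(0.0, sigma, size=200)  # explore
    w = reward(s, a)                                    # immediate reward as weight

    # M-step: reward-weighted least squares of actions on features
    W = np.diag(w)
    theta = np.linalg.solve(phi.T @ W @ phi, phi.T @ W @ a)

print(theta)  # converges toward the optimal policy [2, 1]
```

Because the reward acts as a soft sample weight, low-reward actions barely influence the fit, so each iteration pulls the policy toward higher-reward actions without any gradient computation; this is the essential mechanism the abstract's EM policy search exploits.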