
關(guān)于機(jī)器人dsp控制的外文翻譯-文庫(kù)吧資料


[…]ical designs and software architectures, we also attempt to use our implementations of these models to test and validate the original hypotheses. Just as computer simulations of neural networks have been used to explore and refine models from neuroscience, we can use humanoid robots to investigate and validate models from cognitive science and behavioral science. We have used the following four examples of biological models in our research.

Development of reaching and grasping. Infants pass through a sequence of stages in learning hand-eye coordination. We have implemented a system for reaching to a visual target that follows this biological model. Unlike standard kinematic manipulation techniques, this system is completely self-trained and uses no fixed model of either the robot or the environment. Similar to the progression observed in infants, we first trained Cog to orient visually to an interesting object. The robot moved its eyes to acquire the target and then oriented its head and neck to face the target. We then trained the robot to reach for the target by interpolating between a set of postural primitives that mimic the responses of spinal neurons identified in frogs and rats. After a few hours of unsupervised training, the robot executed an effective reach to the visual target.

Several interesting outcomes resulted from this implementation. From a computer science perspective, the two-step training process was computationally simpler. Rather than attempting to map the two dimensions of the visual-stimulus location to the nine DOF necessary to orient and reach for an object, the training focused on learning two simpler mappings that could be chained together to produce the desired behavior. Furthermore, Cog learned the second mapping (between eye position and the postural primitives) without supervision. This was possible because the mapping between stimulus location and eye position provided a reliable error signal.
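The chaining of two simpler mappings can be illustrated with a minimal sketch. This is not Cog's actual code: the primitive postures, the 4-DOF arm, and the `eye_to_weights` stand-in for the learned second mapping are all hypothetical.

```python
# Illustrative sketch: stage 1 maps a visual-stimulus location to a gaze
# (eye/head) position; stage 2 maps the gaze position to interpolation
# weights over a small set of postural primitives. All values hypothetical.

# Four hypothetical postural primitives for a 4-DOF arm (joint angles, rad).
PRIMITIVES = [
    [0.0,  0.5, -0.3, 0.1],
    [0.8,  0.2,  0.4, 0.0],
    [-0.4, 0.9,  0.1, 0.3],
    [0.2, -0.1,  0.6, 0.5],
]

def eye_to_weights(pan, tilt):
    """Toy stand-in for the learned stage-2 map: bilinear interpolation
    weights from a gaze direction, with pan and tilt normalised to [0, 1]."""
    return [(1 - pan) * (1 - tilt), pan * (1 - tilt),
            (1 - pan) * tilt,       pan * tilt]

def blend_postures(weights, primitives):
    """Convex combination of primitive postures -> target joint angles."""
    n_joints = len(primitives[0])
    return [sum(w * p[j] for w, p in zip(weights, primitives))
            for j in range(n_joints)]

def reach(pan, tilt):
    """Chain the two mappings: gaze position -> weights -> arm posture."""
    return blend_postures(eye_to_weights(pan, tilt), PRIMITIVES)
```

Because the stages are learned separately, the gaze error from the first mapping can serve as a training signal for the second, which is the property the text attributes to Cog's unsupervised second mapping.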
From a biological standpoint, this implementation uncovered a limitation in the postural primitive theory. Although the model described how to interpolate between postures in the initial workspace, it provided no mechanism for extrapolating to postures outside that workspace.

Rhythmic movements. Kiyotoshi Matsuoka describes a model of spinal cord neurons that produce rhythmic motion. We have implemented this model to generate repetitive arm motions, such as turning a crank. Two simulated neurons with mutually inhibitory connections drive each arm joint. The oscillators take proprioceptive input from the joint and continuously modulate the equilibrium point of that joint's virtual spring. The interaction of the oscillator dynamics at each joint with the arm's physical dynamics determines the overall arm motion.

This implementation validated Matsuoka's model on various real-world tasks and provided some engineering benefits. First, the oscillators require no kinematic model of the arm or dynamic model of the system; no a priori knowledge was required about either the arm or the environment. Second, the oscillators were able to tune to a wide range of tasks, such as turning a crank, playing with a Slinky, sawing a wood block, and swinging a pendulum, all without any change in the control system configuration. Third, the system was extremely tolerant to perturbation: not only could we stop and start it with a very short transient period (usually less than one cycle), but we could also attach large masses to the arm and the system would quickly compensate for the change. Finally, the input to the oscillators could come from other modalities; one example used an auditory input that let the robot drum along with a human drummer.
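Matsuoka's two-neuron oscillator is compact enough to sketch in a few lines. In this minimal simulation (parameter values are illustrative, not those used on the robot), each neuron has a membrane state x_i and an adaptation state v_i, the rectified output y_i = max(x_i, 0) of each neuron inhibits the other, and the joint command is y1 - y2.

```python
# Minimal simulation sketch of Matsuoka's mutually inhibiting neuron pair.

def matsuoka_step(state, dt=0.005, tau=0.1, tau_a=0.2,
                  beta=2.5, w=2.5, u=1.0):
    """Advance the oscillator by one Euler step.

    tau, tau_a : membrane and adaptation time constants
    beta       : self-adaptation gain;  w : mutual-inhibition weight
    u          : tonic drive input
    """
    x1, x2, v1, v2 = state
    y1, y2 = max(x1, 0.0), max(x2, 0.0)
    dx1 = (-x1 - beta * v1 - w * y2 + u) / tau
    dx2 = (-x2 - beta * v2 - w * y1 + u) / tau
    dv1 = (-v1 + y1) / tau_a
    dv2 = (-v2 + y2) / tau_a
    return (x1 + dt * dx1, x2 + dt * dx2,
            v1 + dt * dv1, v2 + dt * dv2)

# Simulate 20 s; a small initial asymmetry starts the rhythm,
# and the output y1 - y2 settles into a sustained oscillation.
state = (0.1, 0.0, 0.0, 0.0)
outputs = []
for _ in range(4000):
    state = matsuoka_step(state)
    outputs.append(max(state[0], 0.0) - max(state[1], 0.0))
```

In the arm controller described above, this output would modulate the equilibrium point of a joint's virtual spring, and the joint's proprioceptive signal would feed back as an additional input to each neuron; that feedback term is omitted here for brevity.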
We have implemented Jeremy Wolfe's model of human visual search and attention, combining low-level feature detectors for visual motion, innate perceptual classifiers (such as face detectors), color saliency, and depth segmentation with a motivational and behavioral model. This attention system lets the robot selectively direct computational resources and exploratory behaviors toward objects in the environment that have inherent or contextual saliency. This implementation has let us demonstrate preferential looking based both on top-down task constraints and on opportunistic use of low-level features. For example, if the robot is searching for social contact, the motivation system increases the weight of the face-detector feature, producing a preference for looking at faces. However, if a very interesting non-face object appears, the object's low-level properties are sufficient to attract the robot's attention. We are incorporating saliency cues based on the model's focus of attention into this attention model. We were also able to devise a simple mechanism for incorporating habituation effects into Wolfe's model. By treating time-decayed Gaussian fields as an additional low-level […]
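The weighted feature combination and the Gaussian habituation mechanism can be sketched as follows. The function names and the grid representation are our own illustration, not the robot's actual API.

```python
import math

# Sketch of a Wolfe-style attention stage: each feature detector produces
# a small 2-D activation map, and the motivational state scales per-feature
# gains (e.g. boosting 'face' when the robot seeks social contact).

def combine_saliency(feature_maps, gains):
    """Weighted sum of per-feature activation maps into one saliency map."""
    rows = len(next(iter(feature_maps.values())))
    cols = len(next(iter(feature_maps.values()))[0])
    sal = [[0.0] * cols for _ in range(rows)]
    for name, fmap in feature_maps.items():
        g = gains.get(name, 1.0)
        for r in range(rows):
            for c in range(cols):
                sal[r][c] += g * fmap[r][c]
    return sal

def habituation_field(shape, focus, sigma=1.0, depth=0.8):
    """A negative Gaussian centred on the current focus of attention.
    Treated as one more low-level feature map, it makes a recently
    attended location less salient; decaying `depth` over time would
    let that location recover its saliency."""
    rows, cols = shape
    fr, fc = focus
    return [[-depth * math.exp(-((r - fr) ** 2 + (c - fc) ** 2)
                               / (2 * sigma ** 2))
             for c in range(cols)]
            for r in range(rows)]
```

A search for social contact would then raise `gains["face"]`, while including `habituation_field(...)` among the feature maps suppresses whatever was just attended, so attention moves on.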
點(diǎn)擊復(fù)制文檔內(nèi)容
公司管理相關(guān)推薦
文庫(kù)吧 www.dybbs8.com
備案圖鄂ICP備17016276號(hào)-1