

Foreign-Language Translation on Robot DSP Control


Unlike industrial robots that operate in a fixed environment on a small range of stimuli, our robots must operate flexibly under various environmental conditions and for a wide range of tasks. Because we require the system to operate without human control, we must address research issues such as behavior selection and attention. Such autonomy often represents a tradeoff between performance on particular tasks and generality in dealing with a broader range of stimuli. However, we believe that building autonomous systems provides robustness and flexibility that task-specific systems can never achieve. Requiring our robots to operate autonomously in a noisy, cluttered, traffic-filled workspace alongside human counterparts forces us to build systems that can cope with the complexities of natural environments. Although these environments are not nearly as hostile as those planetary explorers face, they are also not tailored to the robot. In addition to being safe for human interaction and recognizing and responding to social cues, our robots must be able to learn from human demonstration.

The implementation of our robots reflects these research principles. For example, Cog began as a 14-degree-of-freedom (DOF) upper torso with one arm and a rudimentary visual system. In this first incarnation, we implemented multimodal behavior systems, such as reaching for a visual target. Now, Cog features two six-DOF arms, a seven-DOF head, three torso joints, and much richer sensory systems. Each eye has one camera with a narrow field of view for high-resolution vision and one with a wide field of view for peripheral vision, giving the robot a binocular, variable-resolution view of its environment. An inertial system lets the robot coordinate motor responses more reliably. Strain gauges measure the output torque on each arm joint, and potentiometers measure position. Two microphones provide auditory input, and various limit switches, pressure sensors, and thermal sensors provide other proprioceptive inputs.

The robot also embodies our principle of safe interaction on two levels. First, we connected the motors on the arms to the joints in series with a torsional spring. In addition to providing gearbox protection and eliminating high-frequency collision vibrations, the spring's compliance provides a physical measure of safety for people interacting with the arms. Second, a spring law, in series with a low-gain force control loop, causes each joint to behave as if controlled by a low-frequency spring system (soft springs and large masses); see the sketch below. Such control lets the arms move smoothly from posture to posture with a relatively slow command rate, and lets them deflect out of obstacles' way instead of dangerously forcing through them, allowing safe and natural interaction. (For a discussion of Kismet, another robot optimized for human interaction, see "Social Constraints on Animate Vision," by Cynthia Breazeal and her colleagues, in this issue.)
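To make the spring-law idea concrete, here is a minimal Python sketch of one joint's control cycle, assuming a simple proportional force loop. It is not the authors' implementation: the JointState fields, the gain values, and the function names are illustrative assumptions, with the measured torque standing in for the strain-gauge reading and the measured angle for the potentiometer reading described above.

    from dataclasses import dataclass

    @dataclass
    class JointState:
        angle: float      # measured joint angle in radians (potentiometer)
        velocity: float   # measured joint velocity in rad/s
        torque: float     # measured output torque in N*m (strain gauge)

    K_SPRING = 4.0   # virtual spring stiffness, N*m/rad (assumed value)
    B_DAMP = 0.8     # virtual damping, N*m*s/rad (assumed value)
    K_FORCE = 0.3    # low-gain force-loop proportional gain (assumed value)

    def spring_law(state: JointState, posture_angle: float) -> float:
        """Soft virtual spring pulling the joint toward the commanded posture."""
        return -K_SPRING * (state.angle - posture_angle) - B_DAMP * state.velocity

    def force_loop(state: JointState, desired_torque: float) -> float:
        """Low-gain force control: motor command proportional to torque error."""
        return K_FORCE * (desired_torque - state.torque)

    def control_step(state: JointState, posture_angle: float) -> float:
        """One control cycle: the spring law in series with the force loop."""
        return force_loop(state, spring_law(state, posture_angle))

    # Example: an obstacle holds the joint 0.5 rad away from the commanded posture.
    deflected = JointState(angle=0.5, velocity=0.0, torque=0.0)
    print(control_step(deflected, posture_angle=0.0))   # small, gentle restoring command

Because both the virtual stiffness and the force-loop gain are low, even a large deflection produces only a gentle restoring command, which is what lets the arm yield to an obstacle instead of forcing through it.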
Interacting socially with humans

Because our robots must exist in a human environment, social interaction is an important facet of our research. Building social skills into our robots provides not only a natural means of human–machine interaction but also a mechanism for bootstrapping more complex behavior. Humans serve both as models the robot can emulate and as instructors that help shape the robot's behavior.

Our current work focuses on four social-interaction aspects: an emotional model for regulating social dynamics, shared attention as a means for identifying saliency, acquiring feedback through vocal prosody, and learning through imitation.

Regulating social dynamics through an emotional model. One critical component for a socially intelligent robot is an emotional model that understands and manipulates its environment. A robot requires two skills to learn from such a model. First is the ability to acquire social input: to understand the relevant cues humans provide about their emotional state that can help it understand any given interaction's dynamics. Second is the ability to manipulate the environment: to express its own emotional state in such a way that it can affect social-interaction dynamics. For example, if the robot is observing an instructor demonstrating a task, but the instructor is moving too quickly for the robot to follow, the robot can display a confused expression. The instructor naturally interprets this display as a signal to slow down. In this way, the robot can influence the instruction's rate and quality (see the sketch at the end of this passage). Our current architecture incorporates a motivation model that encompasses these exchange types.

Identifying saliency through shared attention. Another important requirement for a robot to participate in social situations is to understand the basics of shared attention as expressed by gaze direction, pointing, and other gestures. One difficulty in enabling a machine to learn from an instructor is ensuring that the machine and the instructor are attending to the same object.
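As one purely illustrative reading of the emotional-model regulation described above, the following Python sketch tracks a scalar "confusion" level driven by a hypothetical tracking-error signal and maps it to a facial display. The names, thresholds, and two-expression repertoire are assumptions made for this example and do not come from the article.

    # Minimal sketch: an emotional signal regulating the pace of instruction.
    # All constants and names below are illustrative assumptions.

    CONFUSION_GAIN = 0.8       # how quickly tracking failures raise confusion
    CONFUSION_DECAY = 0.5      # how quickly confusion fades once tracking keeps up
    DISPLAY_THRESHOLD = 0.6    # level at which the robot shows a confused face
    ERROR_TOLERANCE = 0.3      # tracking error the robot can absorb without confusion

    def update_confusion(confusion: float, tracking_error: float) -> float:
        """Raise confusion when the demonstration outpaces tracking; otherwise let it decay."""
        if tracking_error > ERROR_TOLERANCE:
            return min(1.0, confusion + CONFUSION_GAIN * (tracking_error - ERROR_TOLERANCE))
        return CONFUSION_DECAY * confusion

    def select_expression(confusion: float) -> str:
        """Map the internal emotional state to a display the instructor can read."""
        return "confused" if confusion > DISPLAY_THRESHOLD else "attentive"

    # Example: the instructor moves too fast for two control cycles, then slows down;
    # confusion builds, a confused face is shown, and the display relaxes again.
    confusion = 0.0
    for error in (0.9, 0.8, 0.1):
        confusion = update_confusion(confusion, tracking_error=error)
        print(select_expression(confusion))   # attentive, confused, attentive

The point of the sketch is only the feedback loop itself: the expressed state changes what the human instructor does, which in turn changes the robot's input.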