RoboTank: An Inexpensive Robot Platform to Demonstrate Important USAR Issues

Brenden Keyes and Robert Casey
91.458 – Grad Robotics I
Computer Science Department
University of Massachusetts Lowell
Lowell, MA

Abstract

We show how to create an inexpensive robotic platform that demonstrates two issues Urban Search and Rescue (USAR) operators have to deal with: situational awareness and the robot's perspective. The platform consists of a radio-controlled tank, a wireless video system, a distance sensor, and weighted motors added to the tank's remote control. With this system we hope to educate the user on two points. First, we want to give users a working model that conveys what it is like to teleoperate a robot in hazardous and non-hazardous environments, illustrating that this is a very difficult task. Second, we want to show that having sensors in addition to the live video makes the robot easier to control and leads to better results.

Introduction

Teleoperating a robot in various environments, such as a disaster area, is a very challenging task. The operator carries a great deal of cognitive overhead in maintaining a mental model of what has been seen in the search space. Being aware of what is in the robot's surroundings, and what those surroundings look like, is called "situation awareness" [1]. Teleoperating a robot with just a camera attached to its front is not as easy as one would assume. Many things in the environment may go unnoticed, and it is very difficult to maintain an accurate mental map of both the search space and the robot's immediate surroundings. If operators are unaware of the surroundings, they can damage the robot, the environment, or victims in the area [1].

As a demonstration of this, we have developed a fairly low-cost robot platform. Again, the platform consists of a radio-controlled tank, a wireless video system, a distance sensor, and weighted motors added to the tank's remote control. We want to give users a working model that conveys what it is like to teleoperate a robot in different environments without their having to spend thousands of dollars on a robot. We also want to show that having sensors in addition to the live video makes the robot easier to control and leads to better results.

Hardware Used

This platform was designed as a simple, low-cost system for showing the benefits of improved situational awareness with a teleoperated robot. The robot's base is the HobbyZone M1A1 Abrams Radio Controlled Tank (HBZ2051) [Figure 1]. We chose the tank because it has a small turning radius and appropriate speeds for the task at hand, and, with a little practice, it is relatively easy to control. Finally, the turret on top of the tank provides an inherently easy way to give the video camera pan/tilt capability.

Figure 1: HobbyZone M1A1 Abrams Radio Controlled Tank

The force-feedback system is made up of several components. The first component, the distance sensor, is a Sharp infrared distance sensor (GP2D12). The readings from this sensor are passed via a radio transmitter to a receiver on the controller. The transmitter is the Abacom TXM-418-F and the receiver is the SILRX-418-F.
Figure 2: TXM-418-F and SILRX-418-F (shown to scale)

We modified the controller by inserting two weighted motors inside it, one on the left side and one on the right side [Figure 3]. This modification makes the controller vibrate when the motors are turned on. If the sensor [Figure 4] detects something close, the motors in the controller vibrate at a speed that varies with how close the object is. For example, if the sensor is mounted facing backward and the controller vibrates while the operator is backing up, the operator knows there is an object behind the robot and should stop before causing damage. More than one distance sensor could be added, but we used a single sensor as a proof of concept to meet time constraints.

Figure 3: Inside of the Modified Tank Controller

Figure 4: Infrared Distance Sensor on the Front of the Tank

The wireless video system [Figure 6] consists of a CCD camera that transmits on the 1.2 GHz band. The receiver has a standard RCA video output, so it can be plugged into most TVs, VCRs, and computer video-in cards. We bought this camera system, item number ca12, from 123securityproductions.com.

Figure 6: Wireless Video System

Design

We cut the tank's cannon down to about half of its original length so that the camera would not protrude past the front of the tank, and then mounted the video camera to the end of the cannon with hot glue. The camera is powered by a 9 V battery using the adapter that came with the video system. The battery is mounted on top of the turret so that it can rotate a full 360° with the camera [Figure 7]. The video receiver is placed at the TV or monitor that the operator watches.

Figure 7: Camera Mounted to the Turret

The infrared sensor, which can reliably sense objects from 4 cm to 30 cm, returns analog values; the closer an object is, the larger the value returned. Depending on the value returned by the sensor, we transmit one of three 6-bit digital codes. If the value is under 120, meaning nothing is in the way, we send "010101"; because this code contains alternating bits, it helps keep the transmitter and receiver synchronized. If the value is between 120 and 223, the object is somewhat close and we send "001101"; if it is over 350, we send "110010". A PIC chip emulating the LOGO chip provides the logic for the circuit. The infrared sensor's signal pin is attached to pin 2 of the PIC chip, and the transmitter is wired to the chip as specified in the chip's user manual. The whole circuit is powered by four AA batteries connected in series [Figure 8]. The transmitter transmits on the 418 MHz band; we did not attach an antenna to it because we were getting plenty of range (over 100 ft) without one.

Figure 8: Tank Transmitter Schematic
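To make the transmitter-side logic concrete, the following C sketch shows one way the threshold-to-code mapping described above could be written. It is a minimal sketch, not the authors' actual PIC/LOGO firmware: read_ir_adc() and tx_send_bit() are hypothetical stand-ins for sampling the GP2D12 on pin 2 and driving the TXM-418-F data line, and the handling of sensor values between 224 and 350 (which the text does not specify) is an assumption.

#include <stdint.h>

#define CODE_CLEAR "010101"   /* nothing in the way; alternating bits keep RX synced */
#define CODE_FAR   "001101"   /* object somewhat close */
#define CODE_NEAR  "110010"   /* object close */

extern uint16_t read_ir_adc(void);    /* hypothetical: raw GP2D12 reading */
extern void     tx_send_bit(int bit); /* hypothetical: drive TXM-418-F data line */

static const char *pick_code(uint16_t v)
{
    if (v < 120)  return CODE_CLEAR;
    if (v <= 223) return CODE_FAR;
    if (v > 350)  return CODE_NEAR;
    return CODE_FAR;  /* 224-350 is not specified in the paper; assumed "far" here */
}

static void send_code(const char *code)
{
    for (int i = 0; i < 6; i++)
        tx_send_bit(code[i] - '0');   /* serialize the 6-bit pattern, one bit at a time */
}

void transmitter_loop(void)
{
    for (;;)                          /* send the current proximity code continuously */
        send_code(pick_code(read_ir_adc()));
}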
The receiver circuit is attached to the controller; however, we did not find a good way to mount the circuit on the controller itself, so we made the wires long enough that the circuit can rest on the table next to the TV or monitor. The receiver circuit is constructed in a similar fashion to the transmitter circuit, except that in place of the IR sensor it has an SN754410NE motor driver chip to allow easy control of the vibration motors in the controller. The receiver reads the codes sent by the transmitter on the 418 MHz band. If it receives "010101" [Figure 10], it does nothing. If it receives "001101" [Figure 11], a 3.5 V charge is sent to the motors to cause a vibration. If it receives "110010" [Figure 12], a 5 V charge is sent to the motors to cause a stronger vibration. Each charge lasts only 0.1 seconds, which helps reduce the noticeability of false positives: if a transmission is corrupted and the receiver reads it as "110010", the resulting vibration lasts only 0.1 seconds and will most likely go unnoticed, whereas if something really is there and "110010" is the correct signal, the receiver keeps reading that value for as long as the transmitter sends it, the motors keep vibrating, and the extended vibration is noticed by the operator.

Figure 9: Controller Receiver Schematic

Figure 10: Nothing detected. Top is the receiver, bottom is the transmitter.

Figure 11: Something is far. Top is the receiver, bottom is the transmitter.

Figure 12: Something is close. Top is the receiver, bottom is the transmitter.

The overall communication algorithm is relatively simple. The receiver reads one bit at a time and inserts it into its current 6-bit view window. The window is examined each time a bit is received, and if a complete command is contained in the window, that command is executed. Once a command is received and executed, the window is reset to 000000. If no command is found, every bit in the window is shifted left (shifting the first bit off) to make room for the next incoming bit. The bit strings representing the commands were chosen so that no substring of one command can combine with a substring of another command and be misinterpreted as a complete command. This is not to say that the system is completely error free: the receiver can still read a distorted value and briefly throw off the decoding, but once the bad bit is shifted out of the window the system continues to behave as usual.
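The sliding-window decoder just described can be summarized in a short C sketch. This is a simplified model rather than the firmware actually used: rx_read_bit(), set_motor_voltage(), and delay_ms() are hypothetical stand-ins for the SILRX-418-F data line, the SN754410NE motor driver, and a timing routine.

#include <stdint.h>

#define CMD_CLEAR 0x15u   /* 010101: nothing detected, do nothing */
#define CMD_FAR   0x0Du   /* 001101: pulse the motors at 3.5 V    */
#define CMD_NEAR  0x32u   /* 110010: pulse the motors at 5 V      */

extern int  rx_read_bit(void);              /* hypothetical: next bit from SILRX-418-F */
extern void set_motor_voltage(float volts); /* hypothetical: via the SN754410NE driver */
extern void delay_ms(unsigned ms);          /* hypothetical timing routine */

static void pulse(float volts)
{
    set_motor_voltage(volts);  /* each pulse lasts 0.1 s, so a single corrupted */
    delay_ms(100);             /* frame barely registers, while a real obstacle */
    set_motor_voltage(0.0f);   /* keeps the pulses coming                       */
}

void receiver_loop(void)
{
    uint8_t window = 0;
    for (;;) {
        /* shift the oldest bit off and append the newly received bit */
        window = (uint8_t)(((window << 1) | (rx_read_bit() & 1)) & 0x3Fu);

        if (window == CMD_CLEAR || window == CMD_FAR || window == CMD_NEAR) {
            if (window == CMD_FAR)  pulse(3.5f);
            if (window == CMD_NEAR) pulse(5.0f);
            window = 0;  /* reset the window to 000000 after executing a command */
        }
        /* otherwise the window simply keeps shifting as new bits arrive */
    }
}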
Operation

This unit was used in the lab by us, fellow students, and some professors. Many found it relatively easy to operate once they got used to the sensitivity of the controller. In these tests, only the robot's view and situational awareness could really be evaluated, because the force-feedback system was not yet completed. The users had intimate knowledge of what the area (hallways and labs) looked like, which made gaining and keeping situational awareness easier, since the operators had previous experience in the area. Even so, it was still a good tool for showing how different things look from the robot's perspective. In a few rare cases the user became disoriented as to where they actually were; most of the time, after turning and moving around, they found recognizable landmarks to help localize themselves. In an unfamiliar area, this localization most likely would not have happened as easily.

The tank was also used by kids and adults at BotFest 2004, held at the University of Massachusetts Lowell. The kids seemed to get used to the controls much faster than the adult users, and they were also much more willing to run into obstacles and people in the area. In many cases we heard comments on how different the view was and how the robot was not as easy to maneuver with the video camera as one would think. Many users also looked away from the TV screen to try to find the robot in the crowd. The robot was less than 50 ft away from them, but they did not have a good mental picture of where it was or where it had been. This was the main point we were trying to show with this platform.

The force-feedback system was completed roughly 30 minutes before the BotFest exhibition. It worked well in the lab, and it worked well for about 15 minutes at the exhibition. However, the kids were a little rough with the prototype, and some soldered wires between the controller and the circuit board came apart, so the feedback did not work for the remaining 45 minutes of the show. Even when it was working, the kids did not seem to understand why the controller was vibrating when it did, nor did they care to listen when we tried to explain what was happening; they were just excited to be driving the tank around, which is understandable. The older users, however, did seem interested in the feedback. Also, whenever a user panned the camera, it was very difficult for them to center it again. This, along with other studies we have seen [1], shows that when moving, most users do not notice that the camera is not pointed straight ahead. This can lead to very dangerous situations in which the operator moves forward thinking the way is clear while the camera is really pointed to the left. This is another case where the force-feedback can really help.

Conclusions

The tank definitely succeeded in showing operators how difficult it is to maneuver with only that low-level view. Many of the users at BotFest quickly lost their mental model of what the area looked like and had no idea where the robot was. In a search and rescue application, it would be devastating if the operator found a victim but could not lead the rescue team there because they had no idea where the robot was. This is why more advanced systems include mapping, landmarking, and similar capabilities [2,3].

The force-feedback did not work as well as it should have. If the receiver circuit had been mounted to the controller, rather than having the wires dangle, the wire that came unsoldered most likely would have stayed attached, and we could have seen the feedback system work with more than just one user. In our own tests, the feedback system was very helpful, especially when backing up. Overall, this unit was a good, low-cost way to give an idea of what it is like to teleoperate a robot or vehicle, and to show how difficult it is to maneuver without a direct line of sight. It also shows how hard, and how mentally taxing, it is to build and maintain a good mental model of the surroundings.

Future Work

The first step would be to add more infrared sensors, possibly for a total of four: one on the front, one on the back, and one each on the left and right. We could then add more codes for the transmitter to send that specify which sensor is reading, and have the controller vibrate in different ways depending on which sensor triggered; one possible mapping is sketched below. Secondly, a good feature would be to replace the controller that came with the unit with a computer joystick, which would make moving forward and backward much easier than with the current controller. Adding a wireless 802.11 link could also increase the range the tank can travel. It would also be nice to add a small portable LCD display to the controller and have it show the image, rather than needing to plug the receiver into a television.
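As a rough illustration of the four-sensor idea above, the following C sketch shows one way a receiver could map a (sensor, proximity) pair onto the two existing controller motors. None of this appears in the paper: the sensor and proximity names, the per-motor pulse_left()/pulse_right() drivers, and the choice of which motors fire for which sensor are all assumptions.

#include <stdint.h>

/* Hypothetical sensor positions and proximity levels for a four-sensor version. */
typedef enum { SENSOR_FRONT, SENSOR_BACK, SENSOR_LEFT, SENSOR_RIGHT } sensor_t;
typedef enum { PROX_CLEAR, PROX_FAR, PROX_NEAR } prox_t;

extern void pulse_left(float volts);   /* hypothetical per-motor drivers on top of */
extern void pulse_right(float volts);  /* the existing SN754410NE circuit          */

void vibrate_for(sensor_t s, prox_t p)
{
    if (p == PROX_CLEAR)
        return;                                    /* nothing detected: stay quiet */

    float volts = (p == PROX_NEAR) ? 5.0f : 3.5f;  /* reuse the paper's two levels */

    switch (s) {
    case SENSOR_LEFT:  pulse_left(volts);                      break;
    case SENSOR_RIGHT: pulse_right(volts);                     break;
    default:           pulse_left(volts); pulse_right(volts);  break; /* front/back */
    }
}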
This platform could potentially be turned into an actual USAR platform for search spaces that are relatively flat but deemed too unstable for rescue workers or search dogs to enter, although other, more durable systems might be better suited to that task.

References

1. Jill L. Drury, Jean Scholtz, and Holly A. Yanco. "Awareness in Human-Robot Interactions." In Proceedings of the IEEE Conference on Systems, Man and Cybernetics, Washington, DC, October 2003.

2. J. J. Leonard, I. J. Cox, and H. F. Durrant-Whyte. "Dynamic Map Building for an Autonomous Mobile Robot." International Journal of Robotics Research, 11(4):286-298, August 1992.

3. S. Thrun, W. Burgard, and D. Fox. "A Real-Time Algorithm for Mobile Robot Mapping with Applications to Multi-Robot and 3D Mapping." In Proceedings of the IEEE International Conference on Robotics and Automation (ICRA), San Francisco, CA, 2000.