Executive Summary
This design document outlines the project's purpose, goals, and specifications. It provides a high-level overview of our research and design process, followed by an explanation of our testing procedure. The conclusion offers final insights into the effectiveness of our system.
Our objective in creating this robot was to explore the limits of our knowledge and skills in robotics. Although it has numerous practical applications, our primary aim was to demonstrate that we could develop it at a significantly lower cost than similar systems on the market. The design of our robot drew inspiration from the Dragon Runner, a project developed by Carnegie Mellon University.
Our advanced robotic system showcases impressive capabilities, including remote control via an Apple iPhone, allowing users to operate it within a designated range. It delivers real-time audio and video feedback through an on-board camera and microphone. Notably, the system is designed to endure significant trauma and stress, surviving impacts such as being thrown through windows, dropped from three stories, and ejected from a moving vehicle. Additionally, we have incorporated features like GPS tracking and limited ballistics resistance, enhancing its overall functionality and resilience.
Before initiating design and construction, we conducted extensive research on all project aspects to optimize funding efficiency. We examined various robot case studies and aligned our research and design processes accordingly. Each team member was assigned a specific developmental task, such as programming, microelectronics, or drive train/chassis. Over several months, we delved into key research areas, including Apple software development, wireless communication standards, and a thorough analysis of raw materials and their durability.
The final design of our robotic system meets all initial requirements, featuring a durable shell made from high-impact polymers and a well-structured chassis with an effective suspension system for optimal dampening of inertial forces. Control and video display are managed through a custom hybrid iPhone interface, while power is provided by two rechargeable batteries, enabling continuous operation for up to 30 minutes.
We will assess our robot's performance through various system tests, including a communication test to determine the maximum operational distance before unresponsiveness occurs. Additionally, we will perform impact tests using G-force sensors to measure its durability against known failure thresholds from specific drop heights. Our evaluation will also cover the robot's speed, maneuverability, cooling efficiency, and overall user-friendliness.
Our final system serves as a prime illustration of modern human-robot interaction within the military sector. Looking ahead, we aim to enhance our system by incorporating additional features that were previously limited by budget and time constraints. Ultimately, the project was successful, resulting in a fully operational robotic system and providing us with invaluable experience and insights.
Introduction
“The robot revolution is upon us.”
-P. W. Singer, author of Wired for War: The Robotics Revolution and Conflict in the 21st Century, discussing the current state of robotics in the armed forces.
Robots are increasingly integrated into military operations worldwide, with the U.S. military evolving from a reliance on a few aerial drones in Iraq to deploying approximately 5,300 drones and over 12,000 unmanned ground robots today. Industry experts predict that in the next 25 years, tens of thousands of unmanned systems will be operational, boasting capabilities that exceed current technology by over one billion times. This growth in unmanned systems has transformed modern warfare, highlighting that military robotics often utilize off-the-shelf technology rather than requiring extensive manufacturing processes; for instance, a basic UAV drone can be constructed for under one thousand dollars with the appropriate equipment.
Inspired by a "Do-It-Yourself" approach to robotics, our senior design project focused on developing a robot for military applications, leveraging our group's extensive experience in the defense industry. With expertise in electrical and computer engineering, alongside foundational knowledge in mechanical engineering, we were confident in our ability to build a functional robot. Additionally, our familiarity with electrical components and programming, combined with insights gained from a robotics course taken by several team members, further fueled our enthusiasm for the project.
In our robotics class, we discovered our inspiration for a military robotic platform while watching a compilation of military robotic systems. The Dragon Runner, developed by Carnegie Mellon University for the Marines, particularly stood out to us. This innovative system functions as a sentry and monitoring device and has been utilized for disposing of Improvised Explosive Devices (IEDs). Its resilience to physical abuse is remarkable, as demonstrated in a video where the robot was thrown through windows, dropped from balconies, and even tossed from a moving vehicle, yet it continued to operate as if unaffected.
We were both amused and impressed by the engineering marvel of the Dragon Runner, which retails for $32,000 despite its compact size of only two feet long and one foot wide. This revelation sparked our determination to replicate Carnegie Mellon's innovative system while significantly reducing costs.
(Figure printed with permission by Wikipedia GNU)
Current Research
The law enforcement and military robotics sector is rapidly expanding, driven by the goal of preserving human lives by deploying robots in hazardous environments instead of sending personnel. A U.S. Navy petty officer encapsulated this sentiment, stating, “When a robot dies, you don’t have to write a letter to his mother.” The aspiration is to eventually have more robotic soldiers than human counterparts, thereby enhancing safety in combat situations. Numerous innovative robotics projects are currently underway, aimed at realizing this vision.
The Dragon Runner, a leading sentry robot developed at Carnegie Mellon University and funded by the Marine Corps Warfighting Lab, weighs only 9 lbs and is designed for urban combat, making it easily portable for soldiers. Currently deployed in Iraq, it assists troops in various situations. While our robot shares several features with the Dragon Runner, it is limited by funding and lacks some of the Dragon Runner's advanced capabilities, such as its 45-mile-per-hour top speed and thermal imaging. Notably, the cost to build a single Dragon Runner system exceeds $32,000, whereas our robot is projected to cost under $1,000.
Figure 2 - Top View of Dragon Runner Unmanned Vehicle
(Image reproduced from the Public Domain through the Marine Corps.)
The Battlefield Extraction-Assist Robot (B.E.A.R.) is a remarkable robotic innovation designed to carry up to 500 lbs, making it capable of transporting injured soldiers along with their gear. Its dynamic movement capabilities allow it to navigate challenging environments, including battlefields, mine shafts, toxic areas, and collapsed structures. B.E.A.R. is engineered to enhance safety by autonomously retrieving downed soldiers and delivering them to secure locations, representing a significant advancement in robotic technology akin to the evolution of the Dragon Runner, with a focus on saving human lives.
Finally, we look at a robot that we have been researching in our robotics course.
The "Big Dog" robot, depicted in Figure 3, is an impressive quadruped capable of navigating diverse terrains, including snow, debris, and sand. Equipped with a sophisticated balancing mechanism, it can quickly regain stability if kicked or pushed. This remarkable technology showcases the ability to mimic biological movements in a mechanical design, creating an intriguing yet somewhat unsettling experience.
Figure 3 - Big Dog Robotic Quadruped Built by Boston Dynamics.
(Figure printed with permission by Wikipedia GNU)
Goals and Objectives
The objective of this project was to develop a highly durable, user-controlled robot tailored for military and law enforcement use, aiming to significantly reduce costs. While existing robots are priced around $30,000, our target was to create a functional prototype for under $1,000. To achieve this, we divided our primary objectives into two key categories: hardware and software.
We are developing a long-distance controlled robot utilizing wireless radio communication and socket connections to facilitate data exchange between the robot and the central control unit. To enhance remote monitoring, we will integrate a camera with an audio/video transmitter, along with a global positioning unit to continuously track the robot's location, ensuring safety in case of communication failures. Our design features two separate power supplies: one dedicated to the motors and the other for the microcontroller, camera, wireless communication, GPS, and fans, preventing high motor currents from disrupting sensitive components. Rechargeable batteries with accessible charging plugs will be installed for convenience. The robot's locomotion will be managed by two high-powered DC geared motors driving four all-terrain wheels for optimal traction. A motor controller will regulate the motors through pulse width modulation, while the robot's durable enclosure, made from fiberglass, carbon fiber, and truck bed liner, will withstand impacts. The aluminum frame will securely house all electronic components and peripherals, supported by a suspension system to absorb collision forces.
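The pulse-width-modulation motor regulation mentioned above can be sketched numerically. This is a minimal illustration of the duty-cycle arithmetic only; the 12 V supply and 8-bit register resolution are assumptions for the example, not our final design values.

```cpp
#include <cmath>

// Average voltage a DC motor sees under PWM: the supply voltage scaled
// by the fraction of each switching period the driver conducts (duty cycle).
double pwmAverageVoltage(double supplyVolts, double dutyCycle) {
    return supplyVolts * dutyCycle;
}

// Map a desired duty cycle (0.0 - 1.0) onto an assumed 8-bit PWM
// compare register, as found on many small microcontrollers.
int pwmRegisterValue(double dutyCycle) {
    return static_cast<int>(std::lround(dutyCycle * 255.0));
}

// Example: a 12 V motor supply driven at 75% duty averages 9 V at the
// motor, with a register value of 191 out of 255.
```

Varying the duty cycle in software is what lets the controller trade speed for torque smoothly without any additional analog circuitry.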
The robot will continuously stream audio and video to the user via a custom software application, which will display the current GPS location using Google Maps or a similar service. Control of the robot will be managed through a user-friendly computer interface, with an Apple iPhone serving as the primary handheld controller. The system will operate wirelessly on Windows XP or Linux, enabling the iPhone app to maneuver the robot and access the live video feed.
To effectively manage our project, we will utilize a Gantt chart to establish clear goals within our time constraints. Our process begins with thorough research on each section of the robot, followed by the procurement of necessary parts based on our findings. Once the parts arrive, we will transition into the design phase, focusing first on the electronic components, then the drive train, and finally the robot's exterior. Next, we will develop the software interface for both the gateway and iPhone application. After rigorous testing to ensure compliance with our requirements, we will conduct hardware and software optimization to enhance code efficiency and secure all connections.
Specifications
Following our design research, we established key specifications for our robot's capabilities and physical properties. This section outlines the primary specifications, detailing both the acceptable (low) and enhanced (high) possibilities for each. While there are additional specifications, we have focused on the most essential ones for our project.
Battery Life (Low Usage): 1 hour (acceptable) to 4 hours (enhanced)
(Table created by our group)
Requirements
In this section, we outline the essential requirements for our robot, ensuring that all necessary features have been thoroughly tested. While we have additional tests planned, the highlighted requirements are our primary focus. Detailed test procedures for these critical requirements are documented in Section 4. Additionally, Table 2 presents each requirement alongside our testing methods, providing a clear overview of our evaluation process.
In the final stages of our project, we will utilize these requirements to rigorously test our robot, ensuring that we meet all essential criteria for delivering a fully functional system.
No. | Requirement | How To Test Requirement
1 ARMORD must have a minimum communication range of approximately
We will drive the robot until communication failure and then measure distance.
2 ARMORD must be able to reach a top speed of 15 MPH.
We will time the robot over a measured course until it reaches the final mark.
3 ARMORD must be able to withstand a 2-story drop and equivalent impact and still be able to function normally.
After the robot is dropped, we will continue to operate it and observe its performance.
4 ARMORD must be able to provide video/audio feedback via the onboard camera.
We will check the live feed of our robot as it drives.
5 ARMORD must have a battery life of 1/2 hour while mobile, 3 hours while operating in standby.
We will time how long the robot runs with its motors active until the battery dies, then repeat the measurement while it is stationary.
6 ARMORD must be able to navigate multiple types of terrain, specifically pavement, grass, and sand.
We will drive the robot over different terrains to ensure that the robot can move efficiently.
7 ARMORD must be able to be carried easily in a book bag by an average size human.
We will try to store the robot in the book bags we normally carry on campus.
8 ARMORD must weigh less than 15 lbs.
We will put the robot on a scale and measure its weight.
9 ARMORD must be able to reverse motor direction within 5 seconds.
We will start the robot moving in one direction, immediately reverse the direction of the motors, and watch what happens.
10 ARMORD must have a zero-degree turn radius.
We will drive one motor forward and the other in reverse and watch what happens.
11 ARMORD’s video stream must have less than 5 seconds delay between robot and user.
We will place two synchronized clocks, one in front of the robot and one in front of us, and check the difference between them in the video feed.
12 ARMORD must have a user interface that is simple to use.
We will have someone who has not touched our project interact with the user interface and then fill out a form explaining their experience.
13 ARMORD must have an accurate way of being tracked via GPS.
We will place our robot in many different locations throughout Orlando and check the corresponding GPS readings.
14 ARMORD must have minimum packet loss.
We will send 10 different sets of data multiple times and check the output to make sure we are receiving all packets.
15 ARMORD must maintain a temperature low enough that no electronic equipment will fail.
We will set up thermal sensors inside the device and monitor them while the robot is running.
16 ARMORD must be able to continue movement if it is flipped.
We will test this by driving the robot over an object so that it flips, then letting it continue driving on its opposite side.
17 ARMORD must be able to work in areas saturated with other 2.4 GHz traffic.
We will test our robot at UCF, ensuring it operates without interference.
18 ARMORD must have a cold start time of under one minute.
After the robot has been powered off for a few hours, we will time its startup to confirm it meets the required time limit.
(Table created by our group)
Risks
Every project inherently carries risks, and as full-time students balancing demanding internships, our team faces unique challenges. With only three members, we have a heavier design workload compared to larger groups, fully aware of the commitment required when we chose this project. Additionally, none of us has prior experience building an entire robot, making this a significant learning opportunity. We also face the uncontrollable risk of illness, which could delay our progress by one to two weeks; in such cases, remaining team members will need to step up and support each other to keep the project on track.
Team Management
From the outset of our project, we agreed to share leadership equally, taking collective responsibility for its organization and completion. We understood the necessary tasks to finalize the project and its documentation, so we scheduled weekly meetings to discuss updates and ensure alignment. To manage our documentation effectively, we implemented a configuration management system where one team member maintained the main document, while others submitted their updates to avoid accidental deletions. This approach fostered frequent communication among team members, enhancing collaboration throughout the project.
To effectively manage the project, we divided it into smaller, manageable segments tailored to each individual's expertise and interests. This approach not only enhanced enthusiasm among team members but also contributed to a superior system design and overall project outcome. Figures 4 and 5 provide detailed breakdowns of the electronic and chassis construction, as well as the software and hardware allocation, highlighting our strategic division of responsibilities.
Figure 4 - Hardware Breakdown of Project
(Figure created by our group)
Our project was systematically divided into three key components: software, electronics, and body design. With a team of three members, each possessing expertise in one of these areas, the allocation of tasks was straightforward and efficient.
(Figure created by our group)
Research
This section of our paper details each of the subsections of our robot.
We utilized a variety of resources, such as the Internet, books, research papers, and technical journals, obtaining permission before using any information or data. Our sources are detailed in the works cited section in Appendix A, while the permissions obtained are listed in Appendix B.
Microcontroller
The microchip serves as the brain of our robot, managing all its functionalities by continuously sending and receiving data. It processes incoming data from wireless communications and efficiently parses it for distribution to various electronic components. Additionally, the microcontroller gathers data from components like the GPS unit, ensuring seamless operation. Figure 6 provides a visual overview of the microchip's role in the robot's system.
Figure 6 - Microcontroller in our Robot
The PIC24 is a 16-bit microcontroller that offers enhanced functionality and performance compared to the PIC18. Our final decision will be based on an analysis of instruction sets, data space, and code space. After evaluating various chip packaging options, we have chosen Plastic Dual Inline Packaging (PDIP) for its accessibility to pins on a breadboard and ease of soldering during the final PCB assembly. Table 3 below presents several PIC chips that we have researched.
Programming these chips is straightforward with Microchip's MPLAB IDE, which enables C++ programming and efficiently converts it into the microchip's machine language. This feature is particularly advantageous, as writing assembly code can be complex and time-consuming for those without extensive experience.
Part | Program Memory | Speed | RAM | EEPROM | Operating Voltage
PIC18F2553 32KB 12MIPS 2048 256 2 to 5.5 Volts
PIC18F2680 64KB 10MIPS 3328 1024 2 to 5.5 Volts
PIC18F4520 32KB 10MIPS 1536 256 2 to 5.5 Volts
PIC18F4550 32KB 12MIPS 2048 256 2 to 5.5 Volts
Table 3 - Possible PIC Microchips (Table created by our group)
The selected microchips share essential features such as low operating voltage and CPU speed, but differ in the number of USART (Universal Synchronous Asynchronous Receiver/Transmitter) modules, with the top five having one and the others having two. The choice of microchip depends on our wireless communication and GPS unit requirements. Insights from various forums suggest that opting for a microchip with two USART modules is advantageous, as it facilitates simultaneous input and output operations. The USART plays a crucial role in transmitting our code structures as individual bits in a sequential order, which can then be decoded upon reaching their destination, allowing for the reassembly of bits into complete bytes for effective data handling.
We are considering purchasing Atmel AVR microchips, which feature a modified Harvard architecture and are 8-bit single chip microcontrollers. Our focus will be on the DIP modules of AVR chips to determine their compatibility with breadboards. Both AVR and PIC microchips share similar pricing and packaging types. Additionally, we have compiled a table showcasing the AVR chips under review, all of which are equipped with two or three USART communication terminals.
(Table created by our group)
The AVR microcontrollers offer significantly more program memory compared to PIC microchips, making them a superior choice for memory-intensive applications. Additionally, a standout feature of AVR chips is their provision of multiple USART ports, enhancing their versatility for various communication needs.
USART ports on microchips are essential for data communication, as they transmit each byte by sending its individual bits sequentially. Along with the data bits, the USART transmits a parity bit for error checking and a stop bit to indicate the end of the transmission.
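The framing just described can be sketched in code. This is a hedged illustration assuming a common 8-data-bit, even-parity, one-stop-bit configuration; the actual bit layout depends on how the USART is configured.

```cpp
#include <cstdint>
#include <vector>

// Frame one byte the way an asynchronous serial link transmits it
// (assumed 8-E-1 framing): a start bit (0), eight data bits sent
// least-significant-bit first, an even-parity bit for error checking,
// and a stop bit (1) marking the end of the frame.
std::vector<int> uartFrame(std::uint8_t byte) {
    std::vector<int> bits;
    bits.push_back(0);                // start bit
    int ones = 0;
    for (int i = 0; i < 8; ++i) {     // data bits, LSB first
        int b = (byte >> i) & 1;
        ones += b;
        bits.push_back(b);
    }
    bits.push_back(ones % 2);         // even-parity bit
    bits.push_back(1);                // stop bit
    return bits;                      // 11 bits on the wire per data byte
}
```

The receiving USART strips the framing bits, checks parity, and reassembles the remaining bits into the original byte, which is why both ends must agree on the configuration.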
Motor Controllers
To effectively operate a vehicle with motors, a motor controller is essential. There are two primary types of motor controllers: Adjustable Speed Drive (ASD) and Variable Speed Drive (VSD). For our project, we will opt for ASD due to its benefits, including acceleration control, smoother operation, precise positioning, and torque management. The fundamental mechanism of motor control involves an H-Bridge, which consists of four switching transistors (BJTs or MOSFETs). The H-Bridge is crucial as it enables the motor to brake by shorting its terminals, facilitating efficient control of the motor's functions.
(Figure printed with permission by Wikipedia GNU)
The motor controller operates by utilizing MOSFET/BJT devices as gates to control the direction of current flow to the motor (M). When gates 1 and 4 are closed while gates 2 and 3 are open, the motor receives a positive current, causing it to rotate clockwise, indicated by the red path. Conversely, when gates 2 and 3 are closed and gates 1 and 4 are open, the motor experiences a negative current, resulting in a counterclockwise rotation. The switching of current direction through the motor is achieved by manipulating the states of the MOSFET/BJT gates.
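The switching logic above amounts to a small truth table, sketched below. Switch numbering follows the description above; a real driver would also insert dead time between transitions, which this sketch omits.

```cpp
// Model the four H-bridge switches (true = closed), numbered as in the
// text: closing 1 and 4 drives the motor clockwise (+1), closing 2 and 3
// drives it counterclockwise (-1), and closing both switches on the same
// leg would short the supply -- the "shoot-through" fault that every
// H-bridge driver must prevent.
int hBridgeDirection(bool s1, bool s2, bool s3, bool s4, bool &fault) {
    fault = (s1 && s3) || (s2 && s4);  // supply shorted through one leg
    if (fault) return 0;
    if (s1 && s4) return +1;           // positive current: clockwise
    if (s2 && s3) return -1;           // negative current: counterclockwise
    return 0;                          // no complete path: coast or brake
}
```

Commercial motor controllers implement exactly this interlock in hardware, which is one reason we chose to buy one rather than risk destroying transistors with an accidental shoot-through state.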
We opted to purchase a motor controller instead of building one from scratch to save time. Our search focused on finding a motor controller capable of handling sufficient amperage for two motors, streamlining our overall system efficiency.
Motors
To effectively move our robot, we recognized the necessity of a DC geared motor based on our previous experiences. A geared motor allows us to achieve adequate speed while maintaining the torque required to navigate rough terrain. The gear ratio, which is the comparison of teeth on the input gear to those on the output gear, plays a crucial role in selecting the appropriate DC motor. Generally, a higher gear ratio yields more torque, while a lower ratio increases speed. Additionally, it was essential to select a motor that maximizes efficiency within our budget constraints.
Research indicates that typical DC motors operate at an efficiency range of 40% to 75%. However, incorporating gear ratios can further decrease this efficiency due to energy loss at each gear contact point. For instance, a DC motor functioning at 50% efficiency paired with a 100:1 gearbox would yield an effective efficiency of approximately 33%.
Table 5 - Gear Box Efficiency Versus Gear Ratio
(Table created by our group)
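The compounding of motor and gearbox losses described above is simple multiplication; a brief sketch using the example figures from the text:

```cpp
// Overall drivetrain efficiency is roughly the motor's efficiency times
// the gearbox's efficiency, since each stage passes on only a fraction
// of its input power (the rest is lost to heat and friction at the
// gear contact points).
double drivetrainEfficiency(double motorEff, double gearboxEff) {
    return motorEff * gearboxEff;
}

// Example from the text: a 50%-efficient motor behind a 100:1 gearbox
// that is roughly 66% efficient yields about 0.50 * 0.66 = 0.33 overall.
```

This is why a high reduction ratio is not free: every extra gear stage shaves off output power even as it multiplies torque.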
Our research indicates that the speed, in miles per hour, achieved by a motor can be estimated from the wheel diameter and the motor's rotations per minute. This relationship can be expressed through the following equation.
Speed (mph) = (π × wheel diameter (in) × motor rpm / 60) × 0.0568
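As a quick check of this equation in code (the 5-inch wheel and 1000 rpm figures are illustrative, not final motor specifications):

```cpp
// Ground speed estimate from the equation above: wheel circumference
// (pi * diameter, in inches) times revolutions per second gives inches
// per second, and multiplying by 0.0568 (= 3600 s/hr / 63,360 in/mi)
// converts inches per second to miles per hour.
double speedMph(double wheelDiameterIn, double motorRpm) {
    const double kPi = 3.14159265358979;
    double inchesPerSecond = kPi * wheelDiameterIn * motorRpm / 60.0;
    return inchesPerSecond * 0.0568;
}

// A 5-inch wheel spinning at 1000 rpm works out to roughly 14.9 mph,
// close to the 15 MPH figure in our requirements table.
```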
When selecting motors for robots, supply voltage is a crucial consideration, with common options being 6, 12, and 24 volts. Although we will delve deeper into motor power in the power supply section, it's important to highlight that higher supply voltages enable increased motor speeds, albeit at the expense of more costly battery packs and power supply equipment.
In the realm of DC motors, the brushless DC motor (BLDC) has gained popularity among RC enthusiasts due to its distinct advantages over traditional brushed DC motors. Unlike conventional motors that rely on brushes and a commutator to create an electrical circuit for rotation, BLDC motors utilize stationary electromagnets and rotating permanent magnets, resulting in a more efficient design. The power distribution in a BLDC is managed electronically, eliminating the wear and maintenance issues associated with brushes.
Brushless DC (BLDC) motors offer significant advantages over traditional brushed DC motors, including higher efficiency, reduced electromagnetic interference, and a longer lifespan due to the absence of brush erosion. They also provide increased power output and eliminate ionizing sparks from the commutator. Additionally, BLDC motors do not require internal airflow for cooling, as their electromagnets are attached to the motor casing, allowing for complete sealing against environmental factors. However, the main drawback of BLDC technology is its higher cost, primarily due to the need for complex electronic speed controllers and the current lack of widespread commercial popularity, which leads to hand-wound production methods. Consequently, BLDC motors can be up to three times more expensive than traditional brushed DC motors, with ESC costs adding significantly to the overall price.
Figure 8 - Brushed DC Motor (left) and Brushless DC Motor (right)
(Figure printed with permission by Wikipedia GNU)
Power Management System
Our project required various electronic components, each needing different voltage levels. To achieve these specific voltages, we decided to implement multiple voltage regulators powered by batteries. Given our limited knowledge of this technology, we conducted research on how voltage regulation circuits operate to better understand our approach.
Understanding the theory behind voltage regulator circuits is straightforward. We faced the choice of building our own regulators or buying pre-made solutions. To enhance the design elements of our project, we opted to construct the regulators ourselves.
Our research indicates that constructing a voltage regulation system requires just two electrolytic capacitors, PCB boards, and a voltage regulator. The capacitors play a crucial role in filtering ripple from the voltage regulator and balancing the load to maintain a stable output voltage. Additionally, regulators with different ratings are necessary to achieve the different constant voltage outputs. The circuit layout for our voltage regulation implementation is depicted in Figure 9.
Figure 9 - Voltage Regulator Design for the ARMORD System
(Figure created by our group)
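One practical consequence of linear regulation worth noting is heat. Below is a sketch of the dissipation arithmetic; the 12 V and 0.5 A figures are illustrative assumptions, not measured values from our system.

```cpp
// A linear voltage regulator drops the excess input voltage across
// itself, so the power it must shed as heat is (Vin - Vout) times the
// load current. This is why regulator heat-sinking matters inside a
// sealed enclosure like ours.
double regulatorDissipationWatts(double vIn, double vOut, double loadAmps) {
    return (vIn - vOut) * loadAmps;
}

// Illustrative numbers: stepping a 12 V battery down to 5 V at 0.5 A
// burns (12 - 5) * 0.5 = 3.5 W in the regulator itself.
```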
Wireless Communication
To establish wireless communication between the host computer or iPhone and our robot, we initially considered using the 802.11b/g/n standard for its impressive speed and range. However, concerns about potential interference from numerous nearby 802.11 devices led us to explore alternative options. After researching various forums and websites, we discovered that many users successfully utilize Zigbee or EnOcean for reliable wireless communication in similar applications.
After evaluating Zigbee and EnOcean technologies, we found key differences: Zigbee operates on a 2.4GHz band, while EnOcean utilizes 315MHz/868MHz frequencies. Notably, EnOcean is battery-free, harnessing energy from heat emitted by surrounding electronics. EnOcean claims that the 2.4GHz band is prone to interference, though we believe this won't affect our application. Despite Zigbee's two drawbacks compared to EnOcean, its widespread use means abundant resources, projects, and source code are available online, which is advantageous for us as newcomers to Zigbee. A diagram illustrating our intended use of the Zigbee module is provided in Figure 10.
Figure 10 - Zigbee in our Robot
(Figure created by our group)
Digi is a reputable distributor of Zigbee modules, offering two primary types: the XBee and XBee Pro. These modules are ideal for our project due to their compact design, extended battery life, high data reliability, and ease of replacement in case of failure. A comparison of the XBee and XBee Pro can be found in Table 6.
RF Data Rate 250 Kbps 250 Kbps
Power Down Current < 1 uA < 10 uA
Table 6 - XBee Comparison Chart(Table created by our group)
The XBee Pro offers approximately three times the communication distance of the standard XBee, although it requires higher voltage and current. Despite its increased power demands, the XBee Pro is advantageous due to its extended range, making it a suitable choice for applications where distance is crucial. Additionally, both modules are designed for efficient power consumption, allowing them to operate for extended periods even during continuous transmission and reception.
Chassis
The construction of the robot's chassis is a critical and challenging task, as it serves as the primary protection for our valuable electronics against extreme physical forces. This section covers comprehensive research on chassis construction, focusing on the drive train and the secure mounting of electrical components.
Robot Platform
Initially, we believed a custom-built chassis was essential for our project's durability requirements, but we also explored various robotic platform options. The most viable choice was the Standard ATR all-terrain robot kit from SuperDroid Robotics Inc., notable for its dual-sided design and robust 1/8” aluminum construction. However, we identified drawbacks, including its high price point and limited mounting area for electronics, which were intended for exterior placement. Ultimately, the additional costs, combined with the base price of $600, made this option impractical for our needs.
(Figure printed with permission by SuperDroid)
Figure 12 - Intended Mounting of Peripherals to the ATR (Figure printed with permission by SuperDroid)
In our quest for an innovative robotic platform, we turned to an unexpected source: our childhood memories. In the early 1990s, Kenner, a well-known toy company, launched a highly successful remote-controlled car called Ricochet, which captivated the imaginations of children everywhere.
We discovered a platform that showcased the desired “dual-sided” design, despite its outdated electronics and flimsy plastic casing. This design closely resembled our vision, allowing us to conduct preliminary testing with minor modifications. After researching online, we found multiple units available for under $40.
(Figure was created by our group)
Based on our findings, we decided to proceed with our original plan to build a custom chassis tailored to our requirements. Additionally, we agreed to purchase at least one Ricochet RC car to assist in testing and to potentially serve as a reference for our final design.
Drive Train
The choice of locomotion was vital for our project, as we required a robust system capable of navigating harsh environments while enduring potential impacts from being thrown or dropped. We anticipated that the wheels and axles would endure significant stress, possibly even more than the robot's enclosure. Therefore, we focused on selecting the most effective method for our vehicle's movement, ultimately considering two primary options: tank treads and wheels.
After researching drive trains, tank treads emerged as a strong candidate for providing robots with sufficient torque, similar to their use in military tanks that must move heavy equipment rapidly on the battlefield. These treads consist of a large outer shell featuring tooth-like indents that grip the ground, enhancing the vehicle's propulsion. However, a significant drawback of tank treads is that they skid during turns, leading to energy loss.
Inspired by the Dragon Runner robot, we chose all-terrain wheels for our robot to ensure reliable movement, avoiding the risk of tank treads slipping off during high-impact situations like falls or collisions. Our focus shifted to finding durable yet cost-effective all-terrain wheels and tires that would provide the best balance between longevity and affordability.
In selecting the optimal method for connecting motors to wheels, we evaluated both belt-driven systems and direct motor coupling. Our research indicated that similar robotic platforms favored direct coupling due to its robust connection, while belt drives risk loosening upon impact. However, belt-driven systems offer the advantage of flexibility and resilience against vibrations. Figure 14 illustrates a typical motor and wheel coupling in a comparable robotic system featuring three sets of wheels, demonstrating our planned approach to motor placement.
Figure 14 - Common Drive System in a Typical All-Terrain Robotic Platform
(Figure printed with permission by Wikipedia GNU)
Wheels
After extensive research, we discovered that many suppliers offer tires and wheels separately, increasing overall costs. Additionally, we found that numerous off-road robot wheels are essentially RC car wheels marketed by robot retailers. This insight allowed us to broaden our search for wheels, exploring various retailers that provide tires suitable for diverse terrains. Several of these styles are illustrated in Figure 15.
When selecting the right wheel for our robot, size was a crucial factor. Our research revealed that most RC wheels range from 3 to 5 inches in diameter. We opted for a wheel with a minimum diameter of 5 inches to maximize space for our robot's dual-sided design, allowing equal tire availability on both the top and bottom. By allocating at least 1 inch of tire height on each side, we ensured approximately 3 inches for the body and frame. Additionally, we found that wheel circumference, which is directly related to wheel diameter, affects both speed and torque; smaller wheels provide greater torque, while larger wheels enhance speed. Therefore, finding the ideal wheel diameter was essential for optimizing our robot's performance.
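The speed/torque trade-off above can be sketched numerically. The snippet below is our own illustration (not code from the project): at a fixed motor RPM, ground speed grows with wheel diameter, while the drive force available at the ground for a given motor torque shrinks.

```cpp
#include <cassert>
#include <cmath>

// Ground speed (m/s) for a wheel of diameter d (meters) spinning at rpm:
// one revolution covers one circumference, pi * d.
double groundSpeed(double diameterM, double rpm) {
    return M_PI * diameterM * rpm / 60.0;
}

// Drive force (N) at the ground for a given wheel torque (N*m): force is
// torque divided by wheel radius, so a larger wheel yields less force.
double driveForce(double torqueNm, double diameterM) {
    return torqueNm / (diameterM / 2.0);
}
```

For a 5-inch (0.127 m) wheel at 300 RPM this gives roughly 2 m/s; halving the diameter halves the speed but doubles the available drive force.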
Figure 15 - Common Types of All-Terrain Tire Treads (Figure printed with permission by Superdroid)
Raw Materials
When choosing raw materials for the chassis and its components, we prioritized durability, cost, and weight. Although the robot design we referenced uses proprietary and military-grade materials beyond our budget, we believe we can achieve comparable performance using standard construction materials.
To ensure our vehicle's outer enclosure withstands significant abuse from the elements, we sought a highly durable and cost-effective material. This approach allows us to purchase in bulk, accommodating potential needs for additional enclosures due to breakage or design modifications. We evaluated three materials (fiberglass, carbon fiber, and aluminum), considering their mechanical properties and costs, as summarized in Table 7. This table outlines two key measures of tensile strength: ultimate strength, which indicates the maximum stress a material can endure before failure, and yield strength, the stress level at which a material begins to deform permanently. It's important to note that fiberglass and carbon fiber lack a meaningful yield strength because they are brittle materials, meaning their ultimate strength effectively represents their breaking point.
Table 7 - Comparison of Raw Materials for Outer Enclosure
(Table created by our group)
While carbon fiber offers high tensile strength and light weight, it was ultimately rejected due to its high cost and poor shock resistance, easily shattering upon impact. Aluminum emerged as a more favorable enclosure material, demonstrating good tensile strength and the ability to return to its original shape. However, challenges arose in working with aluminum, as it requires specialized equipment for cutting and welding, which our team lacked. This necessitated outsourcing welding tasks, leading to increased costs and potential delays. Despite these challenges, aluminum remains a viable option for simpler components of our robot.
After researching various materials for our robot enclosure, we discovered that fiberglass, suggested by a team member with auto industry experience, offers a lightweight solution similar to carbon fiber. Despite its poor shock strength, fiberglass can be layered to enhance durability. The construction process involves mixing polyester or epoxy resin with a hardener and applying it over fiberglass mat, resulting in glass-reinforced plastic (GRP), which provides strength. Additionally, fiberglass mat is cost-effective, priced as low as 50 cents per foot, allowing us to save on materials and labor since we have a knowledgeable team member skilled in fiberglass construction. Notably, fiberglass is also transparent to radio frequencies, with low signal attenuation, making it a preferred choice in the telecommunications industry for concealing antennas. Ultimately, our research confirmed that fiberglass is the optimal material for our robot enclosure.
After selecting glass-reinforced plastic for our robot's construction, we sought ways to enhance the enclosure's durability, leading us to Polyurea, commonly known as truck-bed liner. This chemical compound is renowned for its exceptional durability and resistance to environmental factors. Our research revealed military applications of Polyurea for vehicle armoring, and a popular Discovery Channel show, Smash Labs, demonstrated its resilience against various ballistics and explosives, with no damage reported. According to specification sheets from Rhino Linings, a leading Polyurea manufacturer, the material boasts a tensile strength of 41 MPa, making it a robust coating. Additionally, Polyurea's rapid curing time allows for customizable thickness, further enhancing its strength.
Figure 16 - Damage Sustained to Polyurea from Large Grade Explosives
(Figure printed with permission by Wikipedia GNU)
GPS System
We are implementing a GPS system to continuously track our unit's location, ensuring we can locate it if lost or simply determine its current position. The GPS will be connected to a UART interface on our PIC microcontroller, and we have identified several products that facilitate this integration. Our GPS module will provide valuable data, including speed and 3D positioning information.
(Figure printed with permission by SparkFun)
We began our exploration with SparkFun's Copernicus GPS DIP module, which is ideal for our project as it can easily be connected to a breadboard and requires only about 20 mA of power. However, it is important to note that a separate antenna must be purchased to use this GPS module effectively.
The Falcom A03 GPS breakout board is an excellent choice for those seeking a GPS unit with a prebuilt antenna. This device operates with 3.3V level signals, making it compatible with our PIC microcontroller. Notable features include easily accessible pins, an onboard reset button, battery backup power, and an LED indicator for satellite lock. It consumes only 100 mA of power, dropping to 20 mA once a satellite lock is achieved. A visual reference of this unit can be found in Figure 17.
Global Positioning System (GPS) receivers operate passively, receiving signals broadcast by satellites without transmitting anything themselves. The accuracy of GPS relies on precise time references embedded in the satellite messages, each containing location and time data. These messages are continuously broadcast using time codes derived from atomic clocks. By measuring the time it takes for signals to reach the receiver, the GPS can calculate the distance to multiple satellites. When the receiver obtains distance measurements from four satellites, it can accurately determine its position within a four-dimensional coordinate system, incorporating both space and time.
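As a rough illustration of the distance calculation described above, a receiver converts each signal's time of flight into a range using the speed of light; the satellite geometry and the solving of the receiver's own clock bias are omitted from this sketch.

```cpp
#include <cassert>

// Speed of light in meters per second.
constexpr double kSpeedOfLight = 299792458.0;

// Range to a satellite from the signal's time of flight (seconds).
// Real receivers must also solve for their own clock bias, which is why a
// fourth satellite is needed on top of the three spatial unknowns.
double pseudorangeMeters(double timeOfFlightSec) {
    return kSpeedOfLight * timeOfFlightSec;
}
```

A typical GPS satellite orbits roughly 20,000 km up, so times of flight are on the order of 67 milliseconds.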
Power Supply
We are planning to utilize two separate batteries for our robotic system: one dedicated to the motor circuit and another for the electronics and peripherals. In selecting these power supplies, we carefully considered the specific requirements of the components they would support. Our research revealed that the high-power motors we intend to use typically require a 24V power supply. For the peripheral circuit, we will implement a 12V source to provide flexibility and accommodate future expansion, as the maximum voltage draw in this circuit will be from a 9V camera.
When selecting a battery for our motors, it's crucial to consider its capacity in mAh. Our research indicates that the motors we are interested in consume approximately 900 mA each. Ideally, a battery rated at 1800 mAh could run two motors in parallel for one hour. However, due to non-ideal conditions, it's advisable to choose a battery with a capacity between 2000 and 4000 mAh to ensure reliable operation for an hour, allowing for necessary headroom in case of emergencies.
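The capacity estimate above amounts to a one-line calculation, sketched here for clarity:

```cpp
#include <cassert>

// Estimated runtime in hours: battery capacity (mAh) divided by the total
// current draw (mA). Two 900 mA motors in parallel draw 1800 mA, so an
// ideal 1800 mAh pack lasts one hour; real packs need headroom on top.
double runtimeHours(double capacityMilliampHours, double drawMilliamps) {
    return capacityMilliampHours / drawMilliamps;
}
```

A 3600 mAh pack under the same 1800 mA load would nominally last two hours, which is where the 2000-4000 mAh recommendation comes from.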
Choosing the right battery for RC and robotic applications is crucial, with Nickel Metal Hydride (NiMH) and Nickel Cadmium (NiCd) being the two most prevalent options. Key differences between these batteries include their capacity, susceptibility to memory effects, and overall environmental impact. For a detailed overview, Table 8 highlights the advantages of each battery type, while Table 9 offers a numerical comparison to aid in decision-making.
NiMH Batteries’ Advantage Over NiCd Batteries
PERFORMANCE: NiMH batteries will greatly out-perform standard NiCd batteries in high-drain applications.
CAPACITY: The amount of energy stored by the battery. NiMH has more than twice the capacity of standard NiCd, so NiMH batteries have much longer run times.
MEMORY EFFECT: NiMH batteries can be charged at any time without impacting their lifespan, unlike NiCd batteries, which require complete discharge before recharging to maintain optimal performance. NiMH batteries do not suffer from memory effect, allowing for more flexible charging.
ENVIRONMENT: NiMH batteries do not contain hazardous cadmium, making them safer for the environment. Both NiMH and NiCd batteries produce virtually the same voltage, ensuring consistent power output.
Table 8 - Advantages of NiMH vs NiCd Batteries
(Table was created by our group)
Service Life: up to 750 cycles (NiMH) vs. up to 1,000 cycles (NiCd)
Table 9 - Numerical Comparison between NiCd and NiMH Batteries
(Table was created by our group)
A/V System
To optimize our robot's functionality, we decided to minimize the camera's size and avoid sending video and audio through our XBee due to high data transmission requirements. Instead, we opted for a camera with an integrated transmitter, which will allow us to receive signals on the other end. Initially concerned about the cost, we discovered several affordable small cameras capable of transmitting up to 100 feet, perfectly suited for our needs.
We needed to research how to capture the live video feed from the camera and display it on an iPhone. The camera outputs a composite video signal via RCA cable, and we concluded that using a video capture card connected to a computer would be the most efficient method. This setup enables us to transfer the audio and video feed from the camera to the computer, facilitating distribution to the iPhone.
After connecting the video capture card, it can be utilized with various open-source software options, including VLC and ffmpeg. These programs are ideal for capturing video sources, enabling both viewing and streaming through multiple protocols. The choice of protocol hinges on compatibility with the iPhone, and we will evaluate which software delivers the best performance for our project. Our primary focus is on achieving real-time encoding and streaming of the video feed, so the program that excels in these aspects will be selected for use.
The iPhone's media player supports a limited range of video formats, including .mov, .mp4, .mpv, and .3gp, primarily for pre-recorded content. To enable live video streaming, Apple introduced HTTP Live Streaming, which segments MPEG-2 Transport Streams into smaller chunks and generates a playlist file for efficient playback. This method allows clients to access the most recent video segments by reloading the playlist. Apple offers the mediastreamsegmenter command line tool to facilitate the process of creating these file chunks and the playlist on an HTTP server. For compatibility with iPhones, videos must be encoded in H.264 with AAC audio. While this solution appears promising for live streaming projects, the technology is relatively new, resulting in limited documentation.
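For illustration, an index playlist of the sort mediastreamsegmenter maintains might look like the fragment below; the segment file names and durations here are hypothetical, and the client re-requests this file to pick up newly produced segments.

```
#EXTM3U
#EXT-X-TARGETDURATION:10
#EXT-X-MEDIA-SEQUENCE:37
#EXTINF:10,
fileSequence37.ts
#EXTINF:10,
fileSequence38.ts
#EXTINF:10,
fileSequence39.ts
```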
An alternative approach for streaming video to an iPhone is to utilize a direct mp4 stream from the gateway machine. However, this method requires careful attention to the video's format, as the iPhone only supports specific resolutions and encodings; any deviation will result in playback failure. If successfully implemented, this method could simplify the process by eliminating the need for HTTP Live Streaming.
Software
This section outlines the various software components of our robot, highlighting the necessity for seamless communication between the robot and its controller (iPhone). To enhance clarity, we have organized each software segment into distinct subsections.
Software Communication
In our robot design, we chose to use an iPhone as the controlling device, which presented challenges in establishing communication between the iPhone and the robot. The key data elements that need to be exchanged include GPS data and video feed transmitted from the robot to the iPhone, as well as control signals sent from the iPhone to manage the robot's movements.
We chose XBee modules for communication due to their compatibility with the ZigBee (IEEE 802.15.4) network standard, a popular choice in the robotics community. However, interfacing these modules with an iPhone posed challenges, as it would require using Apple's proprietary data port. To achieve certification as “Works with iPhone,” we would need to submit a functional device to Apple for approval, a process we ultimately decided to forgo.
Due to the inability to connect an XBee module directly to the iPhone, we opted to utilize the iPhone's 802.11 WLAN connection to communicate with the robot's board. Although we explored various embedded Linux boards, we ultimately decided against them to maintain sufficient design complexity. Instead, we established a connection where the iPhone communicates with a server via 802.11, which then translates and relays data to an attached XBee module. This XBee module facilitates the transmission of GPS and control data to the corresponding module on the robot. However, for the video feed, we chose an alternative method, as the ZigBee protocol's bandwidth was inadequate.
We are exploring pre-built wireless video transmitter and receiver systems to integrate with our setup. Our plan includes purchasing a video camera equipped with an antenna, which will be mounted on the robot. The receiver module will connect to our server through a video capture card, enabling the video feed to be streamed in MP4 format to an iPhone.
iPhone Development
The iPhone app serves as the main control interface for the robot, presenting challenges in enabling user control and streaming live video from the robot to the device.
There are two primary methods for developing applications for the iPhone: creating a web app or a native app. Web apps are essentially websites optimized for iPhone use, which users can access by opening Safari, the device's built-in internet browser. This approach allows users to navigate to a third-party hosted site, making web apps a flexible option for iPhone application development.
Hosting a website on a local computer limits accessibility, and while web apps are cost-effective and platform-independent, they fall short in utilizing the full capabilities of the iPhone. Specifically, web apps lack access to essential hardware features like multi-touch gestures and the accelerometer, which are crucial for enhancing user interaction, such as controlling a robot through finger slides. This limitation highlights the challenges of web app development compared to native apps that can fully leverage the iPhone's functionalities.
Building a native iPhone application offers significant advantages, including access to the iPhone's multi-touch gestures and hardware functionalities that web apps cannot utilize. However, this approach requires development on a Mac, as the iPhone SDK is exclusively available on Mac OS X. Additionally, to install the application on a physical iPhone, a $99 iPhone developer account must be purchased. Ultimately, we concluded that the power and flexibility of a native iPhone app outweighed these drawbacks, leading us to abandon the web app concept.
After deciding to develop a native iPhone application, we faced the challenge of designing the GUI. The iPhone offers various design options, each with its own advantages. While creating an OpenGL ES application would provide maximum customization for our GUI, our team's lack of experience with OpenGL and its primary focus on game design led us to abandon this approach in favor of simpler alternatives.
One option for app development is creating a Utility app, which is available as a template in Xcode for new iPhone applications. Utility apps feature a dual-sided "card" design, where one side displays primary data and includes a button for flipping the card. Tapping this button triggers a smooth animation, revealing the back side that typically contains settings and configuration options. While this design is visually appealing and user-friendly, it may not provide the complexity required for our specific needs.
We needed something more involved, more of a menu-based application, so again we had to discard this option, as it was not as flexible as we needed.
Figure 18 - “Card” Feature of Utility App
(Figure created by our group)
The third option for our application development was to create a view-based application, which, unlike the utility template, offered greater flexibility. This approach enabled us to design multiple views, essentially functioning as individual screens, allowing users to navigate between them by tapping buttons. Although the view-based app was more complex to design compared to the utility application, it ultimately provided the necessary flexibility for our needs, making it the most suitable choice for our project.
After selecting the type of application to develop, we focused on determining the control method for the robot. We identified three potential solutions for implementing the control scheme on the iPhone. The first approach involved positioning a UIImageView control from the Cocoa Touch library at the center of the screen, flanked by two UISliders on either side.
Figure 19 - Prototype of the First iPhone App
(Figure created by our group)
The UIImageView is intended to display a continuous video feed from the robot, with plans to update the control with new JPG images from the server at approximately 30 frames per second for a smooth viewing experience. While the design performed well on the iPhone simulator, achieving a fluid video feed, testing on the actual iPhone revealed a disappointing update rate of only 2-3 frames per second. This significant discrepancy necessitated the search for a more effective solution.
To enhance our project, we considered utilizing the built-in MPMoviePlayer from the Cocoa Touch library for streaming mp4 videos. However, this player requires full-screen usage, preventing it from being embedded in a view. As a solution, we proposed replacing the UISliders from our initial design with the iPhone's powerful accelerometer, which can detect phone orientation across all three axes. This allows for control of the robot's movements (left, right, forward, or backward) based on the phone's tilt. However, maintaining a balanced position is crucial; any imbalance results in unintended robot movement, making the control experience less fluid and intuitive.
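The tilt-to-movement mapping described above can be sketched as follows. This is our own illustrative logic (the axis conventions, value ranges, and dead-zone threshold are assumptions), not code from the Cocoa Touch SDK; the dead zone is one way to address the balance problem noted above.

```cpp
#include <algorithm>
#include <cassert>
#include <cmath>

// Left/right wheel commands in the -127..127 range.
struct WheelSpeeds { int left; int right; };

// Map accelerometer tilt (roughly -1..1 g on each axis) to wheel speeds:
// pitch drives forward/back, roll steers. The dead zone keeps the robot
// still when the phone is held nearly level.
WheelSpeeds tiltToSpeeds(double pitch, double roll, double deadZone = 0.1) {
    if (std::fabs(pitch) < deadZone) pitch = 0.0;
    if (std::fabs(roll) < deadZone) roll = 0.0;
    auto clamp = [](double v) {
        return static_cast<int>(std::max(-127.0, std::min(127.0, v)));
    };
    return { clamp((pitch + roll) * 127.0), clamp((pitch - roll) * 127.0) };
}
```

Tilting fully forward commands both wheels to full speed, while pure roll drives the wheels in opposite directions to turn in place.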
(Figure created by our group)
We developed a third solution that merges the advantages of our previous designs by leveraging the MPMoviePlayer class to overlay custom controls on the video. This approach maintains the smooth playback from the second design while incorporating the sliders from the first design. The final iPhone control design is illustrated in Figure 21.
Figure 21 - Prototype of the MPMoviePlayer Class
(Figure created by our group)
The iPhone app was designed to include a map feature that allows users to drop pins for both their location and the robot's position, calculating the distance between the two. The ideal choice for this functionality was the built-in MapKit framework, which is supported by Google and Apple. This framework enables the placement of pins based on latitude and longitude coordinates and offers user-friendly features such as swipe-to-scroll and pinch-to-zoom, effectively meeting our mapping needs for both the user and the robot.
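Since MapKit reports pin positions as latitude/longitude pairs, the user-to-robot distance reduces to a great-circle calculation. A minimal sketch, independent of MapKit itself, is:

```cpp
#include <cassert>
#include <cmath>

// Great-circle distance in meters between two latitude/longitude points
// (in degrees), using the haversine formula; more than accurate enough
// for the short user-to-robot ranges involved here.
double distanceMeters(double lat1, double lon1, double lat2, double lon2) {
    constexpr double kEarthRadiusM = 6371000.0;
    constexpr double kDegToRad = M_PI / 180.0;
    double dLat = (lat2 - lat1) * kDegToRad;
    double dLon = (lon2 - lon1) * kDegToRad;
    double a = std::sin(dLat / 2) * std::sin(dLat / 2) +
               std::cos(lat1 * kDegToRad) * std::cos(lat2 * kDegToRad) *
               std::sin(dLon / 2) * std::sin(dLon / 2);
    return 2.0 * kEarthRadiusM * std::asin(std::sqrt(a));
}
```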
To enhance the security of the ARMORD system, we evaluated various options, including a numeric keypad, a username/password combination, and biometrics. After assessing the capabilities of the iPhone, we concluded that implementing biometrics, such as facial recognition, was beyond our group's expertise. Therefore, we opted for a username and password login system, which allows for multiple users with varying security levels, effectively meeting our security requirements.
Gateway Application
The gateway application serves to connect the iPhone to the robot by utilizing XBee modules for data transmission. Due to the lack of a straightforward method for direct communication between the iPhone and the XBee modules, we opted to use a computer as an intermediary to facilitate this connection.
The gateway application needs to effectively send and receive data via a TCP connection from the iPhone and relay it to XBee modules through a serial port. To achieve this, we considered various programming languages, initially narrowing the field to Microsoft's C# and Sun Microsystems' Java, as our team members have experience with both and they meet the requirements for developing the gateway application.
Both C# and Java offer object-oriented features and existing classes for accessing and sending data over TCP connections and serial ports, making them suitable for building the gateway application. However, to facilitate communication with XBee modules, we initially considered using C++ due to prior experience within our team. Upon further investigation, we discovered that both Java and C# have existing XBee APIs, which simplify communication by handling lower-level serial communication tasks. This allowed us to focus on higher-level data transmission without worrying about configuration details. Ultimately, we based our decision on which API would be the most user-friendly and effective for our project.
After comparing the APIs, we found that the Java API offered more complex features but was less user-friendly than the C# API, which had superior documentation and clear examples. Consequently, we decided against using the Java API. We then explored the C++ method for interfacing with the application, which appeared to be the easiest option due to our team's prior experience, allowing us to quickly develop a working program. Our approach involved creating a C++ program to communicate with the XBee and a C# program to connect with the iPhone. However, this raised new challenges regarding the integration between the C# and C++ applications, particularly in transmitting control data from the iPhone to the C# app, then to the C++ app, and finally to the XBee for robot communication. Further research was necessary to establish this connection effectively.
Microsoft's documentation on Interprocess Communications presents several methods for enabling communication between two processes. One approach involves utilizing the system clipboard as a messaging system, allowing a C# application to paste data, such as timestamps and wheel speed values, into the clipboard. A C++ application can then poll the clipboard at short intervals to retrieve the latest data. However, to address the challenge of one application potentially overwriting the other's data, both applications could cut data from the clipboard, ensuring that it remains empty after use. This would enable each application to check for existing data before writing, effectively preventing data conflicts.
The documentation also referenced DCOM, a Microsoft standard for data transmission between compatible applications. However, after researching this unfamiliar area, our group concluded that implementing DCOM would be overly complex and unnecessary. Ultimately, we agreed that any form of interprocess communication would be excessive for our needs.
After eliminating the option of using multiple applications, we faced a decision between C#, C++, and Java for our project. Given our limited experience with Java and the complexity of its XBee module API, it became the least favorable choice. We recognized that C# would facilitate GUI development more easily than C++, thanks to Visual Studio's intuitive drag-and-drop designer, which automatically generates code. However, implementing a GUI necessitates a multithreaded approach, where one thread manages the user interface while another handles background communication with the XBee and iPhone systems.
The .NET framework simplifies asynchronous data reception using the TcpListener class, which enables applications to listen for connections on any port. Once a connection is established, the BeginReceive method allows the application to wait for incoming data asynchronously. This method requires several parameters, including an AsyncCallback delegate that triggers a user-defined function upon completion of the data reception process. For example: `sock.BeginReceive(dataBuffer, 0, dataBuffer.Length, SocketFlags.None, new AsyncCallback(OnDataReceived), null);`.
Figure 22 - Code to Begin Waiting for Data Asynchronously
The OnDataReceived method acts as a callback function triggered when data is ready to be read, executing on a new thread. However, since the .NET framework restricts GUI modifications to the thread that created them, we must use a custom delegate to safely transfer data from the background thread to the GUI thread. This delegate allows the background thread to invoke a method with the received data, ensuring that the GUI components are updated safely and in compliance with multithreading standards.
After evaluating C++ and C#, we found both languages suitable for our project. While C# offers advantages like easy GUI development, it also presents challenges with multithreaded GUI management. C++ likewise supports TCP connections, enabling us to connect to the iPhone through a C++ application. Given our proficiency in both languages, we ultimately opted for C++ due to its superior portability: unlike C#, which requires a Windows machine with a recent version of the .NET framework, C++ can be compiled and run on any operating system, making it a more versatile choice for our needs.
We also realized that the gateway application did not need a GUI, as there would be no direct user interaction with it, so we decided to leave it as a console application.
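As an illustration of the gateway's relay role, the sketch below parses a hypothetical control string of the form `L:<left>,R:<right>` as it might arrive from the iPhone over TCP, before the gateway repackages the values for the XBee serial link. The message format is our own assumption for illustration, not a protocol defined by the project.

```cpp
#include <cstdio>
#include <string>

// Parse a hypothetical control message "L:<left>,R:<right>" (wheel speeds
// in the -127..127 range) received over the TCP connection. Returns false
// on malformed input so the gateway can discard bad packets.
bool parseControlMessage(const std::string& msg, int& left, int& right) {
    return std::sscanf(msg.c_str(), "L:%d,R:%d", &left, &right) == 2;
}
```

In the console application, a receive loop would call this on each message and forward the two speeds to the XBee serial port.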
The ARMORD system necessitates a computer and router to facilitate communication between the robot and the iPhone. In this section, we explore various hardware options and evaluate their suitability for our project. The gateway serves as the intermediary between the robot and the user, processing all information exchanged.
Gateway Hardware
After deciding to use C++ as the primary language for our gateway application, we faced the challenge of selecting suitable hardware and an operating system. We considered various options, including desktops, laptops, and servers. Ultimately, a server was deemed unnecessary due to its excessive computing power for our needs. Given our goal of remote control via an iPhone and the need for mobility, a laptop initially seemed ideal. However, the requirement for a serial port, which most laptops lack, led us back to the desktop option. Despite the decreased portability, we concluded that a desktop was the most viable choice, especially since our router would also need a wall outlet, further limiting mobility.
After selecting our hardware, we needed to evaluate the available operating systems: Mac OS X, various Linux distributions, and Windows. Since we had machines with each OS, our choice depended on compatibility with our project rather than availability. Initially, we considered Mac OS X but quickly dismissed it due to our lack of experience with C++ on Macs and the absence of a serial port for connecting to the XBee module. This led us to focus on Linux and Windows, both of which were installed on our machines and offered the necessary ports for our project. Ultimately, both operating systems presented viable options for our needs.
We opted for a Windows machine for the gateway application, as it was the platform we were most familiar and comfortable with. Given that hardware resources were not a concern, we determined that a standard Windows computer would suffice, eliminating the need for a low-power or high-performance system.
Router
To establish a wireless network between our gateway computer and iPhone, we required a router that supports networking standards such as 802.11a, b, g, and n. Although 802.11n offers superior data transfer rates and range, the iPhone lacks the necessary internal radios to utilize this standard. Consequently, we opted for the widely used 802.11g standard, which is compatible with the iPhone and most laptops. This choice allows us to easily integrate any home router into our communication system, providing a practical solution for our networking needs.
Capture Card
Utilizing a capture card is essential for our project, as it allows for efficient video input without the processing overhead associated with most graphics cards and some motherboards. We have decided to invest in a standalone USB capture card, as opting for a PCI card would restrict us to desktop use. Our research has led us to a USB capture card that simplifies the process: we connect the video output from the receiver to the capture card, which then links to a USB port on the gateway. For our project, we will be using the capture card depicted in Figure 23 to capture live video.
Figure 23 - ATI TV Wonder 600 Video Capture Card
(Figure was created by our group)
When selecting a capture card, we prioritized a wide range of video outputs to ensure compatibility with the iPhone, specifically requiring support for the mp4 format. Additionally, we sought a capture card with audio input capabilities to accommodate our camera's simultaneous audio and video output. To optimize our streaming experience, we also looked for features that allow us to control the encoder's buffering, minimizing lag during transmission.
Design
In this section, we outline the comprehensive steps involved in designing each subsystem of our project. Due to varying schedules, we primarily worked on our individual components from home. Fortunately, we had access to all the necessary tools to effectively construct our portions of the robot.
Board Components
This section focuses on the design of electronic board components for our system, which includes both custom-made PCBs and purchased units. A significant challenge we faced was ensuring seamless communication between these units. In the following subsection, we will detail the assembly and integration process of these components.
Microcontroller
We installed the microcontroller on a standard printed circuit board using a retention contact, allowing for easy replacement without soldering. This method ensures that if the microcontroller malfunctions, we can quickly swap it out without the hassle of desoldering, maintaining the integrity of the board.
After seating the microcontroller on the board, the next crucial step is powering the device. It's essential to connect a voltage source ranging from 3.3 to 5 volts to the VCC pin on the microcontroller, while also linking the GND pin to the ground of the voltage source. Using an improper power supply can lead to inconsistent signals and unstable voltage, which can adversely affect the microcontroller's performance.
To clean up the signal, we begin by applying a higher voltage of 9 volts through a filtering capacitor After filtering, the signal is processed by a voltage regulator to reduce the voltage to the desired level The completed power supply circuit is illustrated in Figure 24 below, and the output voltage is expected to be within an acceptable range of 100 mV.
Figure 24 - Power Supply to Microcontroller
(Figure printed with permission by Sparkfun)
Motor Controller
Instead of opting for a standard H-Bridge for our motor control, we chose the Sabertooth motor controller to ensure the stability and reliability essential for our robot's movement.
The purchased motor controller accepts commands through analog voltage, radio control, serial, and packetized serial connections. Connecting the motors is straightforward, with motor 1 linked to the M1A and M1B terminals and motor 2 to the M2A and M2B terminals. It supports battery connections of up to 24V via the B+ and B- terminals. Notably, this motor controller features regenerative battery recharging when the motors decelerate, and it facilitates rapid stopping and reversing. For our specific setup, which operates in non-lithium mode with microcontroller R/C input and independent mode with exponential control, the switch settings on our robot are illustrated in Figure 25.
(Figure printed with permission by Sparkfun)
After setting up the motor controller with the motors, we will connect our microcontroller to the motor controller, utilizing widely available robotics code to ensure compatibility with our setup. Data transmission between the microcontroller and the motor controller will be achieved through packetized serial communication. The output from our robot will be linked to the S1 input terminal of the Sabertooth motor controller, which features an LED status indicator for real-time monitoring of motor performance and error alerts. For reference, a pinout diagram of the Sabertooth is provided in Figure 26, illustrating the straightforward connection process for the microcontroller.
Figure 26 - Sabertooth Pin-out Diagram
(Figure printed with permissions by Sparkfun)
XBee Controller
We have acquired two versions of the XBee controller essential for our project: the XBee Explorer, which will connect to our robot, and the XBee Explorer USB, designed for connection to our gateway.
The XBee Explorer is an excellent choice for integrating a Zigbee module into a robotic system, providing easy access to essential pins for power, transmission, and reception. Its built-in LEDs offer valuable feedback on device status, enhancing usability. Additionally, the XBee Explorer can accept 5V input and efficiently regulate it to the 3.3V required for operation. We will connect the controller's Dout to the microcontroller's Receive line (Rx) and the Din to the Transmit line (Tx), facilitating seamless data communication between the microcontroller and the Zigbee module, as illustrated in Figure 27.
(Figure printed with permission by Sparkfun)
The XBee USB Explorer, which features a USB port instead of pin outs, simplifies the connection to a computer for programming as a communications port. This device is essential for configuring Zigbee modules with XCTU software, enabling users to modify critical settings such as PAN ID, End Address, and Start Address.
(Figure printed with permission by Sparkfun)
Chassis
In this section of our design process, we focus on the robot's body, which encompasses the drive train, outer and inner structures, and shock absorbers. Instead of opting for a pre-made chassis, we chose to create a custom design to ensure a perfect fit for our electronic components.
Drive Train
ARMORD will utilize a rear-wheel drive system featuring two free-spinning front tires, powered by a pair of IG32P 24V DC motors rated at 265 rpm with a 1:19 gear reduction. While this configuration does not allow for a full 360-degree turning radius, it reduces overall costs. Anticipating that the wheels and axle will endure significant impact from drops, we prioritized a robust drive train and suspension system design. Our suspension system, depicted in Figure 29, and drive train, depicted in Figure 30, include a faux motor cylinder at the front to maintain even weight distribution. This design incorporates springs to absorb impact forces, reducing stress on the motors and shafts. To minimize radial load on the motor shafts, the wheels are mounted as close to the motors as possible, while the aluminum frame surrounding the suspension can also support our electronics.
(Figure created by our group)
(Figure created by our group)
Body Design
The body serves as the crucial first line of defense for our system, requiring careful design to accommodate essential features such as cameras, ventilation, and antennas. Additionally, we must consider basic functionalities like wheel clearance and height restrictions. An efficient removal system for the enclosure is also vital to facilitate repairs and battery charging.
The enclosure was built in two nearly identical halves, starting with a foam "plug" created from layered extruded expanded polystyrene, which was shaped and sanded to meet size specifications. A mold release agent and tin foil were applied over the foam, followed by a coat of polyester resin. Once the resin became tacky, two light layers of fiberglass mat were added, and after drying, the fiberglass mold was removed. This mold was then treated with mold release agent, and an additional five layers of fiberglass mat and one layer of fiberglass woven cloth were applied for enhanced strength, allowing for the creation of the top and bottom halves of the system.
After creating the initial enclosure pieces, we proceeded to modify them for the system's peripherals and extremities. We cut small holes for the motor shafts to attach the wheels, followed by openings for the camera and ventilation system. To streamline fabrication, we only cut holes on one half of the enclosure. Given the curved surface, we reinforced the areas for fan mounting with additional layers of fiberglass and resin. Finally, we applied a tough Polyurea spray to enhance the durability of our components.
We sealed the camera hole with a 1/8” piece of Lexan (polycarbonate) and weather stripping to protect against the elements, while ventilation fans were equipped with steel mesh to block large debris. The enclosure was securely bolted to the frame using reinforcement straps, as illustrated in Figure 31. Additionally, weather stripping was added along the enclosure's perimeter to keep debris and water out, with the final design showcased in Figure 32.
Figure 31 - Attaching ARMORD’s Enclosure to the Frame
(Figure created by our group)
Figure 32 - Front and Side View of Enclosure Design
(Figure created by our group)
Shock Protection of Electronics
To minimize vibration in our robotic systems, we employ rubber washers at all attachment points between components and the frame. Our approach to mounting electronic boards on the aluminum frame significantly enhances electronic protection. This technique involves cutting an opening in the aluminum sheet that is slightly larger than the board, allowing springs to connect the board's contact points to the frame. These springs, secured with bolts and rubber washers, effectively absorb impacts from falls and collisions, safeguarding the electronics while leaving ample space for additional component mounting.
Figure 33 - Method to Mount Electronic Boards to the Frame
(Figure created by our group)
We are introducing a secondary mounting system designed for smaller electronics and components, which, while less complex than our primary method, still ensures effective shock absorption. This solution features bolts encased in springs, strategically placed at both the top and bottom of the aluminum chassis to securely suspend the components. As illustrated in Figure 34, this mounting system is versatile and can accommodate batteries, voltage regulators, and various secondary electronic devices.
Figure 34 - Illustration of Secondary Mounting Solution
(Figure created by our group)
Cooling and Ventilation System
ARMORD's ventilation and cooling system features four mini PC fans, comprising two intake and two outtake fans, all operating at 12V and powered by a battery connected to the control circuit. An independent switch allows for separate control of the ventilation system, enhancing power efficiency during idle testing periods when cooling is unnecessary. This independent control also facilitates stealthier navigation in various environments, while the fan units are designed for ultra-quiet operation. The effectiveness of this ventilation system in cooling internal electronics and motors is illustrated in Figure 35.
Figure 35 - Illustration of ARMORD’s Ventilation System
(Figure created by our group)
To ensure optimal performance and safety of our robot's sensitive electronics, we will utilize three DS18B20 digital temperature sensors strategically placed at the front, back, and middle of the interior. These sensors offer an accuracy of ±0.5 °C across a temperature range of −55 °C to 125 °C. By averaging the readings from these three sensors, we can obtain a representative measurement of the internal temperature, preventing it from reaching hazardous levels. Figures 36 and 37 illustrate the temperature sensor and its correct circuit implementation, respectively.
Figure 36 - DS18B20 Digital Temperature Sensor
(Figure printed with permissions by Sparkfun)
Figure 37 - Correct Circuit Implementation of DS18B20
(Figure printed with permissions by Sparkfun)
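The averaging step described above is simple enough to sketch. The raw-to-Celsius conversion follows the DS18B20's 1/16 °C resolution; the helper names are our own illustration, and actual 1-Wire bus I/O is omitted:

```cpp
#include <array>
#include <cstdint>

// The DS18B20 reports temperature as a signed 16-bit raw value in
// units of 1/16 °C (hypothetical helper; sensor bus I/O omitted).
double ds18b20RawToCelsius(int16_t raw) {
    return raw / 16.0;
}

// Average the three interior sensors (front, middle, back) to get a
// representative internal temperature.
double averageInteriorTemp(const std::array<double, 3>& readings) {
    return (readings[0] + readings[1] + readings[2]) / 3.0;
}
```

The averaged value would then be compared against a shutdown threshold before the electronics overheat.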
GPS System
Our GPS system will be used both for locating the robot in case of a communication failure and for following it on a map application.
GPS Module
We have selected the Falcom FSA03 GPS unit, which comes with a convenient breakout board for easy setup. To connect it, we will attach our 3.3V power source to the VCC pin and the Ground to the GND pin on the breakout board. Additionally, we will link the TXL (transmit line) to the USART receive line and the RXL (receive line) to the USART transmit line, enabling seamless data communication between the microcontroller and the GPS unit. The pin layout for the breakout board is illustrated in Figure 38 below.
A/V System
Our project's audio and video system enables real-time transmission of live video and audio feeds from the robot to an iPhone. This system integrates several smaller components, as illustrated in Figure 39. The process begins with the robot's camera capturing the audio and video stream, which is then sent to a paired receiver station. The receiver station outputs RCA cables connected to a video capture card, allowing us to process the audio and video on a computer. Once the video is properly encoded, the data is transmitted to the iPhone via a secure 802.11 connection.
Figure 39 - Overview of the AMORD AV System
(Figure created by our group)
Camera
Initially, we planned to use our Zigbee module for transmitting frames from our camera; however, the excessive communication overhead led us to opt for a camera with its own transmitter and receiver. This decision alleviated concerns about communication overhead. We chose this particular camera for its compact size, offering an excellent balance of range and affordability. Additionally, it features the capability to transmit both audio and video, enhancing its functionality. Figure 40 showcases the size and design of the camera we will be implementing.
(Figure created by our group)
This camera offers the advantage of operating on a 9V power supply instead of 12V, simplifying testing with just a 9V battery and potentially extending battery life due to lower current draw. However, it has a communication range limitation of 150 feet, which falls short of the desired 300-foot radius. To transmit video, we will connect the receiver to a computer using a universal serial bus (USB) capture card, designed for capturing and streaming live video. To ensure a smooth user experience, we must minimize buffering to reduce delays, although some latency may still occur. The streamed video will be accessible through a self-programmed application or a user-friendly webpage.
Mounting the camera presents a unique challenge, as it cannot be attached like other components: the added suspension would cause unwanted "jerky" motion in the footage. Instead, the camera will be securely fastened directly to the aluminum top plate using bolts. To enhance shock absorption and minimize bounce, a rubber mat will be placed between the camera and the aluminum surface. Additionally, the camera must be permanently fixed in place for optimal stability.
Video Processing
Once the video signal is captured by the video capture card, the processing begins. We utilize VideoLAN's VLC application to access the video/audio feed from the capture card, enabling us to stream data to other computers using various protocols. In our setup, we encode the video in H264 and the audio in AAC, encapsulating them in an MPEG-2 Transport Stream (MPEG2-TS). This stream is then sent back to the local machine over a UDP unicast stream (127.0.0.1:1234) for further processing, ensuring the video/audio can be played on the iPhone.
We chose to implement Apple’s HTTP Live Streaming protocol, as it was the most effective way to deliver a real-time feed to the iPhone.
The "mediastreamsegmenter" command line tool processes the MPEG2-TS output from VLC, preparing it for iPhone distribution by segmenting the live stream into smaller video files and generating a main playlist file. This ensures the client can easily identify the latest video/audio streams. To execute this application, use the command: `mediastreamsegmenter –f ~/Sites/stream –D –t 2 –s 3 –S 1 127.0.0.1:1234`.
This command captures the UDP stream from 127.0.0.1:1234 and segments it per the HTTP Live Streaming protocol, saving the segments and index file to "~/Sites/stream," the location of our HTTP web server. The "-D" option removes old or unused segments as needed, while "-t 2" sets each segment's duration to 2 seconds. Additionally, "-s 3" ensures that three segments are always listed in the index file, and "-S 1" instructs the program to wait until one file is written before generating the index file.
Upon launching the application, the stream undergoes active segmentation, and the index file is continuously updated. Clients compatible with this protocol can download the m3u8 index file from the web server via HTTP, allowing them to start watching the live stream. This process is illustrated in Figure 41 below.
Figure 41 - HTTP Live Streaming Used in our Project
(Figure created by our group)
A side advantage of this method is that, since the stream is hosted on an HTTP web server, any HTTP client can access the feed, not just the iPhone.
Power Supply
ARMORD will operate using two rechargeable NiMH batteries, with the drive circuit powered by a 24V 2200 mAh 20-cell battery and the control circuit by a 12V 2200 mAh 10-cell battery. The drive battery will be connected to a double pole, double throw switch, allowing it to be wired in parallel with the motor controller's inputs when the switch is in the “On” position, ensuring adequate power supply for the two 24V motors.
The control circuit is wired in parallel with multiple voltage regulators to supply adequate power to the various peripherals and electronic control devices. When the switch is in the "Off" position, both circuits connect in parallel to a charging plug, allowing for convenient connection to an AC charger. The schematic representation of this power system is illustrated in Figure 42.
Figure 42 – Schematic of Power Supply and Distribution
(Figure created by our group)
Software
The robot communication system consists of three key components: the iPhone application, the server, and the robot. These elements are interconnected, facilitating the exchange of data to and from the robot. An overview of this integrated system is depicted in Figure 43.
Figure 43 - Data Flow Throughout the System
(Figure created by our group)
The iPhone app serves as the main interface for controlling the robot, allowing users to view its location on a map and adjust its movement via two sliders that regulate the left and right wheel speeds. These slider values are transmitted over a WLAN connection to a server, originally with two decimal places, which resulted in excessive data transfer. To optimize performance and reduce network traffic, the system was modified to send only values that are multiples of 10. This adjustment not only enhances efficiency but also addresses a common issue with touch controls, where slight misalignments in thumb placement could cause unintended turns. By implementing this "buffer-zone" of 10 units, users can maintain a straight trajectory even if their thumbs are slightly off, ensuring a smoother operation of the robot. The data packet sent consists of a simple three-character string, with the first character indicating the left or right wheel control.
The letter ('L' or 'R') identifies which wheel's slider was modified, followed by two digits that denote its current value, which ranges from 0 to 90. This information is processed by a paired module on the robot, allowing the onboard system to move the robot in the intended direction.
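The slider quantization and three-character packet format described above can be sketched as follows; the helper names and the rounding rule are our own illustration, not the project's actual code:

```cpp
#include <cstdio>
#include <string>

// Snap a raw slider value to the nearest multiple of 10, producing the
// 10-unit "buffer zone" and capping the result at the 0–90 range.
int snapToTen(double slider) {
    int v = static_cast<int>(slider / 10.0 + 0.5) * 10;  // round to nearest 10
    if (v < 0)  v = 0;
    if (v > 90) v = 90;
    return v;
}

// Build the three-character packet: wheel letter ('L' or 'R') plus the
// quantized value as two digits.
std::string makePacket(char wheel, double slider) {
    char buf[4];
    std::snprintf(buf, sizeof buf, "%c%02d", wheel, snapToTen(slider));
    return std::string(buf);
}
```

A raw value of 47.3 on the right slider would thus be sent as "R50", so small thumb drifts around a multiple of 10 produce no change in the transmitted packet.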
To enhance user experience, the robot is equipped with a camera that wirelessly transmits video feedback to the iPhone app, allowing users to monitor its movements. The camera's video feed is sent to a paired receiver with standard RCA outputs, which connects to a video capture card on the server. This setup enables the server to stream the live video feed to the iPhone app via the network, ensuring real-time visibility of the robot's path.
iPhone Application
The robot is primarily controlled through an iPhone application developed using the Cocoa Touch library in Objective-C within the Xcode IDE. Key features of the app include a live video feed with slider controls for robot movement and a GPS map displaying the locations of both the robot and the iPhone. This data is transmitted over an 802.11 WLAN network.
The iPhone app features a graphical user interface (GUI) built with several view controllers and a navigation controller that sequentially displays screens to the user. This design effectively guides users through the connection process, ultimately leading them to the main screen where they can view the map or take control of the robot. A prototype of this iPhone GUI is illustrated in Figure 44.
Figure 44 - Prototype of the iPhone GUI
(Figure created by our group)
The app opens with a splash screen featuring a "Begin" button. Once clicked, it transitions to a connection screen where users can enter the server's IP address and port. Pressing the connect button initiates a standard TCP connection attempt to the server. If the connection times out, a relevant pop-up message alerts the user.
Upon establishing a connection, the iPhone transmits an initial data packet, which the server verifies by responding with a matching packet. If the iPhone fails to receive a correctly formatted response, it detects an issue and terminates the connection. Once a successful connection is confirmed, the iPhone app displays the main interface, allowing users to view the robot's location on a map or access its drive system for control.
The MKMapView class in Cocoa Touch is utilized to display a map that shows the user's current location, obtained through the Core Location framework, alongside the robot's position reported by its onboard GPS. After loading, the map zooms in to optimally display both locations, dropping pins at each coordinate. Additionally, the distance between the two points is shown on an overlaid UILabel.
The robot control screen incorporates elements from the Cocoa Touch libraries, specifically utilizing an MPMoviePlayerController to showcase the live feed from the robot's camera. Additionally, it features two UISliders positioned on either side of the video display, enabling users to easily manipulate the left and right wheel movements of the robot.
iPhone applications are developed using the Model-View-Controller (MVC) design pattern, which organizes each screen into two distinct files. The XIB file represents the "View," containing the graphical elements and layout of the user interface. In contrast, the controller file, written in Objective-C, acts as the "Controller," managing user interactions with the GUI and executing necessary actions. The "Model" component, responsible for data storage, is minimal in this application, as most data is transmitted over the network.
For our iPhone application, each screen will need its own UIViewController subclass tied to a XIB file. Each view controller manages the GUI components on its screen.
The ArmordAppDelegate class, which implements the UIApplicationDelegate protocol, serves as the root class of the app, facilitating communication between the iPhone OS and the application. It is responsible for loading the main application view and features a UINavigationController object that manages a stack of screens, enabling smooth navigation by allowing developers to easily push and pop UIViewControllers on the iPhone's display. This functionality is essential for creating a seamless user experience in iPhone app development.
The application lifecycle of an iPhone application is unique. Other operating systems allow multiple applications to run at once; the iPhone OS, however, allows only one application to run at a time, creating a sandboxed environment for each app. Upon launching an application, the OS activates it, and the ArmordAppDelegate class, adhering to the UIApplicationDelegate protocol, initializes the main window and displays the initial view, typically a splash screen. User interactions are relayed from the OS to the active screen, which manages touch events accordingly. After processing an event, the application returns to an event-wait loop, ready to handle additional user inputs.
Figure 46 - UML Sequence Diagram of iPhone application
(Figure created by our group)
Each loaded screen in the application triggers the viewDidLoad method, which is essential for initializing elements when the view appears. Conversely, the viewDidUnload and dealloc methods are invoked during the view's destruction, allowing the view controller to execute finalization tasks. In this context, these methods are utilized to set up crucial fields, such as the IP Address and Port of the gateway application, ensuring a smooth user experience as users navigate through the app.
To push a view controller onto the navigation controller, first create and initialize the view controller using the line `vcSettings *vc = [[vcSettings alloc] init];`, which links it to a XIB file, ensuring that the GUI elements are properly set up during initialization. Next, call the `pushViewController:animated:` method on the navigation controller, passing the newly created view controller (`vc`) as the parameter and setting the animated Boolean to YES for a smooth transition as the new view controller slides onto the screen.
[self.navigationController pushViewController:vc animated:YES];
Figure 47 - Pushing a View Controller onto the Screen
The last line of the code snippet releases the local copy of the view controller, as the "pushViewController:animated:" method retains its own instance. After the view controller is displayed, it can be removed from the screen by invoking the "popViewControllerAnimated:" method of the UINavigationController, which effectively returns the user to the previous screen.
Another option for bringing view controllers onto the screen is the `presentModalViewController:animated:` method, which temporarily displays modal view controllers that slide up from the bottom of the screen, unlike pushed view controllers, which slide in from the left. This functionality is commonly utilized in iPhone applications for quick tasks, such as editing or entering information. For instance, when users tap the IP Address or Port buttons on the main screen, a simple input view with a textbox and keyboard appears, enabling data entry. A "Done" button located in the top right corner dismisses the modal view controller. This differentiates it from the `pushViewController:animated:` method, which adds the new view controller to the navigation stack rather than presenting it on top of the current screen.
The IP Address and Port fields conveniently remember the last used IP address and ports entered by the user, eliminating the need for reentry each time the app is launched.
The iPhone OS enables applications to store data in the /Documents folder, ensuring that this data remains intact even after the app is closed or reinstalled. We opted to use a plist (property list) file for data storage, which is an XML format that holds key-value pairs. This method is effective as plist files support basic data types such as strings, integers, arrays, dates, and data. By loading the data into NSDictionary or NSMutableDictionary objects, we facilitate straightforward read and write operations. In our implementation, the plist file will specifically contain two key-value pairs: one for the IP address and another for the port.
Microcontroller Code
The microcontroller code, developed in C++, simplifies the programming process compared to assembly language. Its primary function is to manage the robot's operations and transmit data back to the gateway.
The microcontroller operates with a semaphore function that enforces two distinct states, ensuring that when one state is active, the other cannot run. This design prevents data corruption by restricting both states from simultaneously accessing the program's critical section. Although there are two concurrent threads executing functions, the semaphore mechanism guarantees that only one thread operates at any given time. Despite potential concerns about delays, the application's capability to execute millions of instructions per second mitigates this issue. Figure 54 illustrates the system's functionality.
(Figure created by our group)
The initial phase of the program is the Receive state, where user input determines if one or both sets of wheels should be activated. An increment variable facilitates this process, gradually returning to zero when not in use. Once the necessary values are gathered, they are transmitted to the motor controllers, which generate pulses to initiate wheel movement.
The Transmit state of the program is responsible for sending the unit's position obtained from the GPS, along with the current motor status, back to the Zigbee device.
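The two mutually exclusive states can be sketched as below; a standard mutex stands in for the semaphore, and the state and field names are illustrative rather than the project's actual code:

```cpp
#include <mutex>
#include <string>

// Shared data guarded by a single lock: while the Receive state holds it,
// the Transmit state cannot run, and vice versa.
struct SharedData {
    std::mutex lock;
    int leftMotor = 0;        // latest commanded wheel values
    int rightMotor = 0;
    std::string gpsLocation;  // latest GPS fix to report back
};

// Receive state: store the user's wheel commands for the motor controller.
void receiveState(SharedData& d, int left, int right) {
    std::lock_guard<std::mutex> guard(d.lock);  // excludes the Transmit state
    d.leftMotor = left;
    d.rightMotor = right;
}

// Transmit state: read the position to send back over the Zigbee link.
std::string transmitState(SharedData& d) {
    std::lock_guard<std::mutex> guard(d.lock);  // excludes the Receive state
    return d.gpsLocation;
}
```

Because each state takes the same lock around the critical section, the two threads can never corrupt the shared motor and GPS values, matching the semaphore scheme described above.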
The essential part of our microcontroller code stores key values, including a string variable named GPSLocation that continuously tracks our robot's GPS coordinates. Additionally, we utilize a structure with two integer variables, LeftMotor and RightMotor, which determine the required acceleration or deceleration of the motors. A value of 0 indicates that the corresponding motor is not instructed to move, prompting an exponential slowdown until it comes to a stop.
This ongoing process ensures that our system remains continuously updated during the execution of our program. The core of our microcontroller code is a structure that encompasses all the variables needed to update both the motors and the GPS.
Our structure comprises integers, strings, and Booleans, with the variable "Running" indicating the robot's mode; it returns true when the robot is in running mode and false otherwise.
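An illustrative version of this structure is sketched below; the field names follow the text, while the exponential-slowdown factor (halving per control tick) is our own assumption:

```cpp
#include <string>

// State structure holding the values described above.
struct RobotState {
    std::string GPSLocation;  // continuously updated GPS coordinates
    int LeftMotor = 0;        // commanded change in motor speed (0 = none)
    int RightMotor = 0;
    bool Running = false;     // true while the robot is in running mode
};

// When a motor's commanded value is 0, its current speed decays
// exponentially toward a stop (halved each tick in this sketch).
int decayStep(int currentSpeed) {
    return currentSpeed / 2;  // e.g. 80 -> 40 -> 20 -> 10 -> 5 -> 2 -> 1 -> 0
}
```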
Gateway Application
The server application, developed in C++, facilitates seamless communication between the iPhone and the robot by offering a straightforward method for transmitting data to the XBee module.
Upon starting, the application establishes a TCP listener on the local machine's port and awaits a connection. Once a connection is made, it expects an initial message from the iPhone, which is then reciprocated by the server, forming a crucial "handshake" for data transfer. This handshake is essential; if the iPhone fails to receive the initial packet from the server, it indicates a connection issue with the robot. Consequently, the iPhone app can respond appropriately by displaying a pop-up message, rather than sending packets into a void.
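The handshake check itself reduces to a simple comparison; the socket plumbing is omitted and the function name is our own illustration, since the document does not specify the packet contents:

```cpp
#include <string>

// The client accepts the connection only if the server echoed back the
// exact initial packet; an empty exchange is treated as a failure.
bool handshakeSucceeded(const std::string& sent, const std::string& received) {
    return !sent.empty() && sent == received;
}
```

If this check fails, the iPhone side tears down the connection and shows the pop-up message described above instead of sending control packets into a void.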
Once the handshake is established, the application is ready to receive data from the iPhone, utilizing a messaging structure based on UTF-8 encoded strings. To transmit data via a socket, these strings must be converted into a byte stream for network transmission and then reassembled into strings upon receipt. This process includes parsing the strings to extract usable data, especially since the iPhone may send multiple slider values in a single packet. After obtaining the usable data, it is forwarded to the XBee functions, which prepare and transmit the information to the XBee module.
The iPhone transmits control data as a string consisting of a letter indicating the wheel, 'L' for left or 'R' for right, followed by a numerical value between 0 and 90. The gateway application processes this data to create the suitable data structure for transmission to the XBee module.
The gateway application not only transmits data to the robot but also receives crucial GPS information from the XBee module, specifically latitude and longitude coordinates. This data is formatted into a string for the iPhone by sending one packet at a time, which includes the latitude, a semicolon delimiter, and the longitude. The application then converts the UTF-8 encoded string into a byte stream for network transport to the iPhone, where it is decoded back into a readable format for further processing.
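Packing the GPS fix into the semicolon-delimited string might be sketched as follows; the six-decimal precision and the function name are our assumptions, as the document does not specify them:

```cpp
#include <cstdio>
#include <string>

// Format a GPS fix as "latitude;longitude" for transport to the iPhone.
std::string makeGpsPacket(double latitude, double longitude) {
    char buf[64];
    std::snprintf(buf, sizeof buf, "%.6f;%.6f", latitude, longitude);
    return std::string(buf);
}
```

On the iPhone side, the string is split at the semicolon to recover the two coordinates for the map display.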
Testing
In order to evaluate our specifications and requirements, we have developed several testing procedures for key aspects of our system.
Communication Range
We will evaluate our robot's communication range by conducting tests in both ideal and poor radio frequency (RF) environments. The ideal test will take place in a vacant mall parking lot, providing an unobstructed line of sight and minimal RF interference. Conversely, the poor RF environment will be the University of Central Florida campus, characterized by numerous obstructions and high RF signal levels. In both locations, we will identify a stretch of at least 100 feet and drive the robot in a straight line until communication is lost and the vehicle stops. The distance traveled will be measured and recorded, with multiple trials conducted at each site to calculate an average communication distance.
Video Communication Range
This assessment will determine the maximum distance the robot can cover while maintaining a stable video signal It will follow a similar structure to the communication range test, focusing specifically on the video transmission capabilities of the wireless camera.
Speed Test
The speed evaluation test will assess our robot's performance on various surfaces, including pavement, grass, and sand. A 40-foot stretch of ground will be divided into two 20-foot sections for this purpose. The robot will be driven straight down the track, and an evaluator will use a stopwatch to record the time taken to cover the second 20-foot section. This method allows the vehicle to achieve its top speed before measurement. Speed will be calculated by dividing the distance of 20 feet by the recorded time. To ensure accuracy, multiple trials will be conducted on each surface, and tests will be performed with a fully charged battery to prevent speed reductions due to low power.
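The calculation described above is straightforward; the helper names below are our own, and the trial-averaging mirrors the multiple-trials procedure:

```cpp
#include <vector>

// Speed over the timed section: distance divided by stopwatch time.
double speedFtPerSec(double feet, double seconds) {
    return feet / seconds;
}

// Average the per-trial speeds over a 20-foot section (the default).
double averageSpeed(const std::vector<double>& trialTimes, double feet = 20.0) {
    double total = 0.0;
    for (double t : trialTimes) total += speedFtPerSec(feet, t);
    return total / trialTimes.size();
}
```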
Battery Life
Battery life will be assessed in both full-load and idle scenarios. For the full-load test, the robot will be elevated to prevent movement, with voltmeters connected to the batteries while a custom program activates the wheels at maximum speed and the camera. The duration until battery depletion will be recorded. Subsequently, the idle test will be conducted without wheel movement to measure battery life in a resting state. Finally, the results will be compared to our design requirements to ensure validation.
Durability
To test ARMORD’s capability to sustain damage from high elevation and collision, we will conduct a series of impact tests.
Drop Test
To evaluate our vehicle's safety for electronics, we will establish a control using a scrap silicon computer board, which we aim to protect from damage. We will attach multiple Shockwatch G-Force indicator labels to the board and drop it from a height of three stories; these labels are typically utilized by shipping companies for packing insurance. By using labels with varying g-force thresholds, we can determine the force required to cause damage. After identifying the force necessary to compromise the board, we will drop the vehicle from the same height to assess its safety for the electronics inside.
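The logic of reading the labels can be sketched as a small bracketing function. The g-force ratings and trigger pattern below are hypothetical illustrations, not the actual Shockwatch ratings we will use.

```python
def bracket_impact_force(thresholds, triggered):
    """Shockwatch-style labels fire when the impact exceeds their rating.
    Given ratings in g (ascending) and which labels fired, bracket the
    impact between the highest fired rating and the lowest intact one."""
    fired = [g for g, hit in zip(thresholds, triggered) if hit]
    intact = [g for g, hit in zip(thresholds, triggered) if not hit]
    lower = max(fired) if fired else 0
    upper = min(intact) if intact else float("inf")
    return lower, upper

# Hypothetical ratings: a drop that fires only the 25g and 50g labels
# implies an impact somewhere between 50g and 75g.
print(bracket_impact_force([25, 50, 75, 100], [True, True, False, False]))
```

Running the same bracketing on the control board tells us the force that damages electronics; comparing the vehicle's bracket against it tells us whether the shell absorbed enough of the impact.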
After the drop, the vehicle will be examined to identify which labels were activated by the fall. Provided that the labels at the thresholds found to damage the control board remain un-triggered, the vehicle's electronics will be considered safe.
Figure 56 - Shockwatch G-Force Indicator Labels of Varying Magnitude
(Figure printed with permission from Wikipedia)
Shock Test
The collision test will closely resemble the drop test, with the key distinction being that instead of dropping the vehicle, it will be struck with a blunt object. This method aims to replicate the effects of colliding with walls and of debris falling on top of the vehicle.
Software Test
To ensure our software components function correctly, we can decompose the system into three primary elements: the iPhone application, the gateway application, and the microcontroller code. Each of these applications performs specific functions essential to the overall operation of the system.
The iPhone application is designed to control the robot effectively, providing real-time video feedback and mapping capabilities. Users can maneuver the robot using the iPhone's touch interface, adjusting the speed and direction of each wheel for seamless navigation. The app allows users to view the robot's perspective in real time, enabling them to avoid obstacles while exploring. Additionally, the mapping feature displays the robot's current position alongside the user's location, offering an approximate distance between them for enhanced control and awareness.
To determine whether the gateway application is functioning correctly, we need to ensure it can successfully relay data between the iPhone and the XBee modules. This involves connecting both devices to the gateway application and testing the functionality of a slider control that adjusts the wheels. The application must capture the slider values from the iPhone and transmit them to the XBee module. Additionally, the gateway application should facilitate the transfer of GPS location data from the robot to the iPhone. This requires connecting the XBee modules and the iPhone to the gateway app, enabling the robot’s GPS receiver to send data through the microprocessor to the XBee system, where it is then captured by the gateway application. Successful verification of the GPS location data transmission confirms that the gateway app is operational.
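The gateway's core job, relaying messages in both directions, can be sketched as below. The iPhone link and XBee serial port are abstracted here as simple queues, and the message formats ("WHEEL:..." and "GPS:...") are illustrative assumptions, not the project's actual protocol.

```python
from collections import deque

def relay_once(from_iphone, to_xbee, from_xbee, to_iphone):
    """Forward one pending message in each direction, if any."""
    if from_iphone:
        msg = from_iphone.popleft()
        if msg.startswith("WHEEL:"):      # slider values for the wheels
            to_xbee.append(msg)
    if from_xbee:
        msg = from_xbee.popleft()
        if msg.startswith("GPS:"):        # robot position for the map view
            to_iphone.append(msg)

# Hypothetical traffic: one wheel command down, one GPS fix back up.
iphone_out, xbee_in = deque(["WHEEL:80,80"]), deque()
xbee_out, iphone_in = deque(["GPS:28.6024,-81.2001"]), deque()
relay_once(iphone_out, xbee_in, xbee_out, iphone_in)
print(list(xbee_in), list(iphone_in))
```

The slider test above amounts to confirming that a value pushed into the iPhone-side queue appears on the XBee side, and the GPS test confirms the reverse path.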
To ensure the microcontroller code functions correctly, it is essential to verify that it consistently captures and transmits GPS data and control data for both the left and right wheels. This involves connecting the GPS module to the microcontroller to confirm accurate data capture and successful transmission to the XBee module; the same approach applies to testing the control data.
User Interface Test
We will assess the user-friendliness of our interface by conducting a survey of 20 randomly selected individuals at the Student Union, where diverse backgrounds and professional fields converge. Participants will rate the ease of controlling our robot using an iPhone. The feedback gathered will enable us to refine our interface and control mechanisms to better cater to the general population. A copy of the survey intended for participants is shown in Figure 57.
Figure 57 - Interface Survey for Use in Testing Ease of Use
(Figure was created by our group)
GPS Positioning Test
We will conduct a test to ensure the onboard GPS module delivers accurate readings by driving our robot to various locations around Orlando. Utilizing our GPS interface, we will compare the robot's actual position with its virtual representation. While we anticipate some discrepancies, likely a few meters, due to the limitations of consumer-grade GPS, our goal is to confirm the overall accuracy of the system.
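Comparing the two positions reduces to a great-circle distance between the known fix and the reported fix. A minimal sketch using the haversine formula is shown below; the coordinate pairs are hypothetical fixes a few meters apart, not surveyed test points.

```python
import math

def gps_error_m(actual, reported):
    """Great-circle distance in meters between two (lat, lon) fixes,
    computed with the haversine formula (Earth radius 6,371 km)."""
    lat1, lon1, lat2, lon2 = map(math.radians, (*actual, *reported))
    dlat, dlon = lat2 - lat1, lon2 - lon1
    a = (math.sin(dlat / 2) ** 2
         + math.cos(lat1) * math.cos(lat2) * math.sin(dlon / 2) ** 2)
    return 2 * 6371000 * math.asin(math.sqrt(a))

# Hypothetical fixes near the UCF campus, a few meters apart.
err = gps_error_m((28.60250, -81.20010), (28.60252, -81.20013))
print(f"{err:.1f} m")
```

An error of a few meters at each test location would be consistent with the consumer-grade accuracy we expect.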
Cold Start Test
This test aims to assess the cold start time of our robot by leaving it unpowered for 12 hours. Upon powering it up, we will measure the duration required for the system to become fully operational, including the startup of the iPhone and the initiation of communication. Conducting this test multiple times will help us calculate an average startup time for more accurate results.
Video Stream Delay
This test aims to evaluate the time delay in our video stream by using two synchronized clocks: one positioned in front of the onboard camera and the other in front of the user viewing the stream on an iPhone. By calculating the time difference between these clocks, we can accurately measure the video delay. The test will be conducted both while the robot is stationary and while it is moving at full speed, allowing us to assess the delay under different conditions. To ensure accuracy, the test will be repeated multiple times to determine an average delay time.
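The delay calculation itself is a simple average of clock differences; a sketch follows, with hypothetical clock readings standing in for the real trials.

```python
# Each trial records, at the same instant, the time on the clock in front
# of the camera and the (lagging) time visible in the iPhone stream.
def average_delay_s(trials):
    """trials: (camera_clock_s, stream_clock_s) pairs; the stream shows
    an earlier time, so the difference is the delay."""
    return sum(camera - stream for camera, stream in trials) / len(trials)

# Hypothetical stationary-robot trials, times in seconds.
stationary_trials = [(10.42, 10.00), (25.38, 25.00), (40.45, 40.00)]
print(round(average_delay_s(stationary_trials), 3))
```

A second list of trials taken at full speed would be averaged the same way, letting us compare the two conditions directly.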
Ventilation and Cooling Test
This test aims to ensure that our robot's internal temperatures align with the recommended operating ranges of its components. To facilitate this, we have created Table 12, which outlines the recommended temperatures for several key system components. By identifying the components with the lowest temperature thresholds, we establish our testing criteria. Should our tests reveal internal temperatures exceeding these criteria, we will need to re-evaluate and redesign our cooling systems.
Component                    Recommended Operating Temperature
On-board Camera              0˚ C to 40˚ C
XBee Pro Wireless Module     -40˚ C to 85˚ C
Table 12 - Recommended Operating Temperatures of Components
(Table created by our group)
The onboard camera has a temperature tolerance range of 0˚ to 40˚ Celsius, which sets the baseline for our testing; exceeding these limits could lead to camera malfunction. To monitor temperature variations during different operational scenarios, we will utilize embedded temperature sensors and algorithms. Our first scenario involves measuring temperature while the robot idles with ventilation on, conducted both indoors and outdoors. The second scenario tests the robot moving at high speed with ventilation on, also in both environments. Finally, we will assess the critical scenario of the robot idling with the ventilation system off. Should any tests indicate temperatures below 0˚C or above 40˚C, we will explore alternative cooling or heating solutions for the system.
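The pass/fail check against the camera's limits can be sketched as below. The 0-40 ˚C bounds come from the text; the sensor readings are hypothetical.

```python
# Camera tolerance from Table 12 sets the tightest criterion.
CAMERA_MIN_C, CAMERA_MAX_C = 0, 40

def out_of_range(readings_c):
    """Return the sensor readings that fall outside the camera's tolerance."""
    return [t for t in readings_c if not CAMERA_MIN_C <= t <= CAMERA_MAX_C]

# Hypothetical internal temperatures (C) for the idle, ventilation-off case.
idle_vent_off = [22, 31, 38, 43, 45]
print(out_of_range(idle_vent_off))
```

Any non-empty result for a scenario would flag that scenario as needing a cooling (or heating) redesign.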
Budget and Financing Analysis
After extensive research on various aspects of our project, we have finalized the components for our end product, which are detailed below. Our budget outlines both the projected and actual costs for each item, with a comprehensive breakdown provided in Table 13.
Figure 58 illustrates the discrepancies between our estimated and actual costs, highlighting our accuracy in estimating most items. Our final estimated cost was $662.00, compared with an actual end price of $739.00. The only way these numbers could change is if something breaks during testing and must be replaced.
Even though the price of this robot came to $739.00, this is modest compared to what most robots cost today. The Dragon Runner robot cost over $32,000, yet our robot incorporates many of the same advanced features, demonstrating our effective budget management and strategic design choices.
We sent many emails seeking financing for our project, but we either received no response or were told the recipients would not finance the project.
We will continue to look for sponsors for our project so that we can reimburse ourselves for the out-of-pocket money we have spent.
Before securing sponsors, we agreed to split the project costs evenly among the three of us to ensure fairness. This decision influenced the design and features of our robot; had we been in a better financial position, we would have invested more to enhance its capabilities.
We believe our project has significant profit potential, especially when compared to the Dragon Runner, which costs over $32,000, while we have developed ours for under $1,000. We propose an asking price of $5,000 per unit, which would encompass 200 hours of combined software and hardware engineering work. Given our robot's affordability, we anticipate strong market interest from customers seeking cost-effective alternatives. Additionally, we think our innovative technology could attract defense companies, either through a purchase or through funding for further research and development. Our preference as a team leans toward the latter option.
Final Design Review
While we appreciated various aspects of our design, there are certainly elements we would alter if given the opportunity. At the outset of this project, we embarked on our journey with limited knowledge about the diverse components of our robot.
In our robot's hardware design, we initially chose a microcontroller to showcase our learning, but we realized that using an Arduino board would have been a more efficient alternative. The Arduino offers multiple built-in connections, eliminating the need for separate ZigBee controllers, motor controllers, and individual camera transmitters. With its USB capabilities, we could easily connect USB cameras and network adapters, and even run Linux to stream live video directly from the robot. Priced around $150, the Arduino's advantages make it a popular choice in robotics. Additionally, we would have liked to hire a company to custom design our robot's outer and inner shell, which would have ensured structural integrity and allowed us to focus on other aspects of our project.
While the software performance was acceptable, we found the hardware lacking, particularly in terms of robot control. Streaming video using the iPhone SDK proved challenging compared to the ease of streaming to a computer. We believe that utilizing portable handheld Windows devices, which function like laptops, would have been an ideal solution, allowing us to run simple Windows applications, which we are more skilled at writing than iPhone SDK applications.
Extensions
We aim to finish our project ahead of schedule and have set specific goals for this possibility. Our objectives include enhancing the robot with additional features to boost its performance and usability, contingent upon the availability of time and budget.
One potential enhancement for the robot is the addition of an IR sensor and camera, capable of detecting individuals from up to 20 feet away. Upon recognizing a person, the robot will notify the user and switch to a quiet mode, minimizing motor noise. The integrated camera will enable night vision without the need for lights, making it particularly advantageous for reconnaissance missions where stealth is essential.
Another extension would be to attach a silenced weapon to the front of the robot.
Utilizing advanced robotics in military missions can significantly reduce the risk to human lives, making it preferable to deploy robots in high-stakes situations. Given our robot's rapid response and maneuverability, such a weapon would be an excellent addition to the ARMORD system. With the capability to rotate the gun barrel through a 45-degree angle, the robot would gain operational efficiency and effectiveness in critical scenarios.
We aimed to enhance our robot's surveillance capabilities by adding three additional cameras. Initially, we planned to install one camera at the back to monitor both the front and rear views. Furthermore, we intended to position two more cameras, one on the top and another on the bottom, allowing us to capture a comprehensive view of the surroundings beyond just eye level.
Project Milestones
This section of our documentation is a breakdown of each semester’s milestones.
We set our milestones so that we would accomplish enough each term that we wouldn’t have to do anything last minute.
Senior Design 1
In our Senior Design 1 term, we focused on two main objectives: conducting thorough research for our robot's construction and completing our project paper ahead of the deadline. From the moment we selected our project, we dedicated ourselves to extensive research. Additionally, we aimed to finish our paper early, allowing ample time for revisions to ensure proper spelling and grammar. Our commitment to hard work throughout the term enabled us to achieve these goals successfully, as illustrated by our Gantt chart in Figure 59.
Figure 59 - Senior Design 1 Gantt Chart
Our team efficiently researched multiple topics simultaneously by dividing the tasks among team members, which accelerated the process of identifying the right components. Regular meetings were held to keep everyone updated on each section's progress, ensuring that all team members gained expertise across different areas.
Winter Break
During our winter break, we aimed to maximize productivity, anticipating the challenges of our final semester before graduation. We continued our project seamlessly, treating the break as an extension of the term. Our focus was on establishing goals for hardware procurement and assembly, as illustrated in our winter break Gantt chart in Figure 60.
Figure 60 - Winter Break Gantt Chart
Senior Design 2
In the final term of our project, our main focus was on programming the hardware and developing the software components. The key tasks remaining included testing and troubleshooting, cleaning up both hardware and software, conducting final tests, and preparing for our final presentation. Although this may appear to be a substantial workload, we ensured ample time was allocated for each step, allowing us to complete every aspect of the project thoroughly. The timeline for these activities is illustrated in our Senior Design 2 Gantt chart, shown in Figure 61.
Figure 61 - Senior Design 2 Gantt Chart
During our senior design project, we initially believed we could manage deadlines by simply remembering them and relying on daily interactions with our team. However, after a week, it became clear that this approach was ineffective. To enhance our organization and collaboration, we implemented a Gantt chart, a strategy we learned in our Object Oriented Software class, to better track our progress and meet our deadlines as a group.
Effective time management is crucial for staying on track and avoiding delays. Our Gantt chart provided a sufficient timeframe for each task, allowing us to complete some tasks ahead of schedule. This experience has reinforced the importance of using charts and schedules for time management, as relying solely on memory often leads to oversights.
Conclusion
Our group embraced the challenge of the ARMORD project, driven by our passion for robotics, which made the experience enjoyable and fulfilling. This project encompassed various engineering disciplines, including mechanical, electrical, and computer engineering, allowing us to apply and expand our skills. We diligently researched unfamiliar concepts to ensure a comprehensive understanding, facilitating the design process. Ultimately, our final design aligned with our initial vision, leaving us proud of our achievement.
Many believe that joining a group of friends in Senior Design can jeopardize those relationships, but our experience contradicts this notion. Success in teamwork hinges on understanding and effective listening, rather than merely asserting one's own ideas. While disagreements are natural, our ability to communicate openly allowed us to reach consensus when necessary. This strong communication transformed our group into a cohesive team of engineers, proving that collaboration with friends can be both productive and rewarding.
Reflecting on our project, we have no regrets, though there are aspects we would change, such as increasing our budget, adding a fourth team member, and having more time. A larger budget would have allowed us to enhance our robot with additional features. Balancing work and bills often took priority, and an extra mechanical engineer could have improved our robot's dynamic chassis. While we wished for more time, we made the most of the limited hours available, utilizing them wisely and efficiently.
Through this project, we gained valuable insights into teamwork and robotics, understanding the balance between leadership and collaboration. We realized how limited our knowledge of real-world applications was, which will improve as we gain experience and tackle more complex projects. Additionally, we discovered the mathematics underlying our electronics classes and the intricacies involved in creating complex circuits. Finally, we learned that programming in a classroom setting differs significantly from real-world applications, requiring more thorough planning and research.
Our group possesses valuable insights and skills, and we aspire to collaborate in the future, whether through daily interactions or by assisting each other in overcoming challenges in upcoming projects. Ultimately, despite our robot not being flawless, this experience provided us with more learning opportunities than many of our previous courses combined.