
SDP 2011 – GROUP X – HAT-TRICK – FINAL REPORT

INTRODUCTION

Group 2, a.k.a. Hat-Trick, built a relatively unusual 3-wheeled holonomic robot, ALF, the Automated Lego Footballer. Although the possibilities entailed by ALF's design were not fully exploited, the robot performed well, losing out to a competitor of similar design and capabilities in a very close semi-final. Goals & achievements, design, development & testing, and an assessment of performance with a view to future improvements are explored for each facet of the project in turn.

TEAM & PROJECT ORGANISATION

GOALS & ACHIEVEMENTS

Designate team roles and team leader – Fully
Use sprints to organize team work, and review changes in team strategy – Fully
Use pair programming to offset differences in ability – Partially
Use websites to communicate information and coordinate group efforts – Fully
Track members' progress with group document – Partially
Use individual group and progress reports to write group report – Fully

APPROACH

Team organisation is the most critical component of any project. To lead the team, we chose XXX, the member with the most experience in software development and the organisational skills to run the project. To organise our work, we chose Scrum, an agile development method. In Scrum, each week is a "sprint" in which team members are assigned responsibilities to execute. The responsibilities were tracked on note cards. At the end of a sprint, team members participated in a show-and-tell session where each member described the activities they had worked on. After the show and tell, new responsibilities were assigned.

The first thing determined was the relative experience and skill set of each member, and where to allocate them. Members adept at Java were assigned to Control and members adept at Python were assigned to Vision. Eventually, these teams were replaced by roles, so each member played many roles.

The team used Google Sites to store pictures of the robot, videos, code conventions, instructions for setting up environments, and other information. Additionally, the team used Google Groups to coordinate the writing of the group reports: members submitted their individual reports to be browsed by the writers of the group report. However, this policy was inefficient, and it changed to individuals submitting brief statuses of their accomplishments.

The first repository the team used was the school's SVN service, accessed remotely with PuTTY and TortoiseSVN. However, due to failures of the school's SVN service, the team migrated the code to Assembla, which was more reliable and was used until the completion of the project.

Team Member | Primary roles
XXX | Testing, vision
XXX | Vision
XXX | Vision, agent, strategies & build
XXX | Movement & control
XXX | Agent, strategies & simulator
XXX | Vision, robot design & build, agent & strategies
XXX | Team leader, robot design, build, communications, concurrency & simulator
XXX | Collision avoidance & handling, user interface

TABLE : TEAM MEMBER RESPONSIBILITIES

The first report described the activities of team members in essay form. After feedback, we decided to include a red-amber-green (RAG) table to illustrate the team's progress. A RAG table is colour-coded to indicate the status of each item within it: green means completed, amber partially accomplished, and red failed.

ASSESSMENT

The first week was not very productive due to poor organisation. Once organisation, standards and conventions were in place, the team's productivity greatly improved. Scrum increased efficiency: because a show and tell was held each week, the team had constant updates on each member's progress. Through Scrum, we could identify and resolve issues quickly.


Occasionally, code written by one member had to be altered by another. Fortunately, good coding practices and adherence to convention saved time. Using RAG tables in conjunction with a report improved communication in subsequent reports.

SOLUTION ARCHITECTURE

GOALS & ACHIEVEMENTS

Evaluate and choose programming languages that match the team's capabilities and allow good performance – Fully
Create a scalable and modular system – Fully
Ensure all members set up and use common development environments – Fully

DESIGN

The right tools and programming languages were thoroughly researched and discussed at the beginning of the project. The skills of the team were the main criterion in making these choices; performance was also important. The table below shows the main parts of the system and the languages, frameworks and libraries picked.

Component | Language/framework
Image processing | Python, OpenCV
Control & strategies | Java
Main user interface | Java, Swing
NXT and RCX | LeJOS, Java
Simulator | Java, Phys2D, Swing

TABLE : LANGUAGE & FRAMEWORK CHOICES

The most commonly known language within the group was Java, which is why it was used for the Control, Planning & Strategy part of the system. For the same reason, the NXT and RCX were programmed with LeJOS, a Java-based replacement firmware for the Lego Mindstorms.

An assumption was made that image processing requires more computation than the rest of the code running on the PC. Research showed that the OpenCV library was a good choice because of its optimised vision algorithms. The library supports C, C++, Python and Java, and is faster and easier to use with the C, C++ and Python bindings than with the Java ones. The first two were dismissed because the team was more familiar with Python. Moreover, Python is a scripting language and does not require compilation, which speeds up development and testing.

DEVELOPMENT & TESTING

Language conventions were established and common programming IDEs were used (NetBeans for Java and regular text editors for Python) to standardise and ease development, testing and understanding of the project. Additionally, the overall structure of the project (Appendix B) was planned and chosen up front so that deadlines could be set and tasks could be planned and re-evaluated in advance.

Event-based thread synchronisation was used to minimise delays in concurrency. The different components (e.g. vision and control) were tested separately, as they are essentially different applications. However, as development progressed, testing required components to work together. For example, strategies needed feedback from the vision system or simulator, as well as communication with the robot. Additionally, the system was designed to work on both Linux and Windows. To ensure that everybody could work independently and efficiently, detailed guides on setting up the development environments were written and posted on our website.
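As an illustration of the event-based approach, the sketch below shows one way a producer thread (e.g. a vision receiver) can hand the latest data to a consumer thread (e.g. the control loop) without polling. The class and names are illustrative, not the project's actual code.

    import java.util.concurrent.ArrayBlockingQueue;
    import java.util.concurrent.BlockingQueue;

    public class EventHandOff {
        // Capacity 1: only the latest frame matters to the consumer.
        private final BlockingQueue<int[]> frames = new ArrayBlockingQueue<>(1);

        /** Producer thread: publish new world data as soon as it arrives. */
        public void publish(int[] frame) {
            frames.clear();       // drop any stale frame (single producer assumed)
            frames.offer(frame);  // wakes the waiting consumer immediately
        }

        /** Consumer thread: block until an event arrives instead of busy-waiting. */
        public int[] awaitNext() throws InterruptedException {
            return frames.take();
        }
    }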

ASSESSMENT

Our system allows easy and fast development because it is modular and uniform across the different components. All the main components of the system are entirely separate and could easily be substituted. The communication protocol is standardised across the parts and treats every channel equally. However, this caused problems, as it did not allow easy customisation – the light communications, for example, could have been separately optimised to achieve higher efficiency.

ROBOT DESIGN & BUILD

GOALS & ACHIEVEMENTS

A holonomic 4-wheel robot – Fully
A fully holonomic 3-wheel robot – Fully
Meets size & robustness requirements – Fully
Kicker powerful enough to reliably score from the opposite side of the pitch – Partially


DESIGN

The main goal of the group was to achieve full holonomic movement. A holonomic design allows movement in any direction, treating all kinds of motion the same way. The orientation of the robot is unimportant; only the direction of the kicker (forwards) gives it meaning. Many high-level robot designs were discussed and assessed (see the table in Appendix C). From the many designs we considered, two were thoroughly compared and built – a 4-wheel and a 3-wheel holonomic robot (#3 / #5 and #2). We built #3 with the idea of upgrading it to #5, but after a final assessment (described below) we chose and built #2.

The cost of a second NXT was too high, assuming holonomic wheels were also purchased, ruling out dual-NXT designs. A ball-bot design was briefly considered but deemed too ambitious within the robot size constraint.

Designs #3 and #5 yield the more limited holonomic motion of the "Holly" design. The difference between the motor pairs would make precise control over curved movements, and spins during straight-line motion, very difficult. Additionally, the command latency is higher for the motor pair controlled via a multiplexor or RCX. The increased latency of the NXT-RCX communications (see Communications & Concurrency) makes design #3 far inferior to design #5.

Only designs #2, #4 and #6 allow full holonomic movement, because identical motors give uniform wheel control. Design #6 has a multiplexor with an associated command latency, resulting in non-uniform wheel control; moreover, fitting 4 NXT motors takes too much space. That left designs #2 and #4. The RCX was chosen over a motor multiplexor (#2 vs. #4) since the former allows up to 5 collision sensors while the latter allows only 3. More collision sensors were considered valuable on a robot whose footprint was not rectangular – more sensors are needed to cover the many faces.

We built a 4-wheel prototype similar to, but substantially better than, last year's winner "Holly": design #3 used two pairs of motors of different types, arranged symmetrically to provide good grip and stability (Figure ); it was 1 cm smaller in length and width than Holly, had two kickers (Figure ) and a better gear ratio on the RCX motor pair (Figure ). Nevertheless, we were willing and felt confident to try to achieve something innovative, so we built the 3-wheel holonomic robot. The previously described problems do not exist in such a design. Fewer motors of the same type still allow full holonomic movement, but at the cost of reduced speed: each wheel is oriented at 60° to the others, meaning that the robot moves at 2/3 of its maximum speed (Figure ). The NXT block would handle movement, and a multiplexor / RCX the kicker. So we finally chose to build and use design #2.

We chose to try the RCX for kicker control (using two RCX motors) mainly because we could use additional sensors and still be able to connect motors. Direct motor-wheel connections were preferred, as this would minimise wheel wobbling and increase the accuracy of movements such as spins at exact angles or precise approaches to the ball without hitting it. An equilateral triangular base was constructed to attach the wheels at the needed angle (Figure ). To mount the rest of the components, a rectangular frame had to be put on top of the triangular one (Figure ). Many changes were made to improve the kicker's power and the overall robot stability, and to fit within size constraints (Figure and Figure ).

During the first build attempt, the small distance between the wheels and the base's centre, along with the high centre of mass, made the robot unstable: it tipped over on sudden stops. Attaching the rectangular frame on top of the triangular base was hard due to the nature of the LEGO parts, so mounting the NXT, RCX, and the kicker (with its motors) was difficult. We managed to connect the motors directly to the wheels. This first design (Figure ) was not optimal, as the centre of mass was still high and the wheels were exposed. So the frame was further lowered by flipping the motors (Figure ). The wheels were brought closer together and an outer cage, connected to the inner rectangular frame, was built. We further lowered the centre of mass by laying the control blocks flat on the rectangular base instead of standing them on their sides as before. After a few attempts, a continuous-action kicker was built (Figure and Figure ). Sadly, it was not powerful enough, so it was changed to a stop-start-action kicker by rearranging the motors and adding extra levers (Figure and Figure ). Most of the time, we were able to score from every point of the pitch.

DEVELOPMENT & TESTING

Incremental improvements were developed by two team members working on the robot alternately. While it was being built, the robot and kicker were constantly tested, and their weight, durability and friction were assessed and compared to our aims. If goals were not met, the components were rebuilt or


changed.

ASSESSMENT

We aimed for, and achieved, a very robust robot: it is compact and stable while still fitting within the size constraints. Its volume is small due to the shape of its frame, and the control blocks are easily accessible. The main problem is kicker power: it is close to our original aims but does not meet them, due to the low power of the RCX motors.

MOVEMENT & CONTROL

GOALS & ACHIEVEMENTS

Omni-directional movement without orientation change – Fully
Omni-directional movement and spin – Fully
Curved movement with changing orientation prototype – Fully
Curved movement implementation – Partially

DESIGN

Given that Java and Python were chosen as the programming languages for the control and vision aspects of the project respectively, the Java API LeJOS seemed the most natural choice for NXT control. Another LEGO programming language was considered but dropped, due to unfamiliarity and efficiency concerns.

Because of the dynamic nature of the environment, and in order to reduce the time spent transferring data from the computer to the NXT block, the basic functionalities of the robot – move, spin, stop and kick – were programmed on the NXT block itself. These functionalities were then executed by issuing commands from Java LeJOS to the NXT via Bluetooth.

To enable the robot to move at a specified angle, a vector-based algorithm was developed. Given the required angle of movement, the algorithm computes three different speeds for the three wheels, as sketched below. For further details refer to Appendix E.

Since one of the main tasks of the robot was to score goals, it was necessary for the robot to orient its kicker to face the ball. This was achieved by a very simple move-and-spin action. With a view to improving this action, a prototype curved motion was developed which offered limited spin-while-moving capability. For further details refer to Appendix D. There were also plans to use this prototype to correct orientation drift, or to spin the robot intentionally while moving along a straight line, by issuing many tiny sequential curved moves in conjunction with feedback from the vision system.
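A minimal sketch of this computation is given below; it assumes the wheel layout of Appendix E (wheels 120° apart, at 60°, 180° and 300° to the robot's x-axis) and projects the desired velocity onto each wheel's drive direction. Names, sign conventions and the optional spin term are illustrative, not the project's actual code.

    public class HolonomicDrive {
        // Wheel mounting angles relative to the robot's x-axis (Appendix E).
        private static final double[] WHEEL_ANGLES = {
            Math.toRadians(60), Math.toRadians(180), Math.toRadians(300)
        };

        /**
         * Computes the three wheel speeds for translation at the given heading,
         * with an optional shared spin term for curved movement.
         */
        public static double[] wheelSpeeds(double heading, double power, double spin) {
            double[] speeds = new double[3];
            for (int i = 0; i < 3; i++) {
                double a = WHEEL_ANGLES[i];
                // Project the velocity onto this wheel's drive direction,
                // which is perpendicular to the wheel's radius: sin(heading - a).
                speeds[i] = power * Math.sin(heading - a) + spin;
            }
            return speeds;
        }
    }

For example, wheelSpeeds(Math.toRadians(45), 1, 0) yields approximately {-0.259, -0.707, 0.966}, matching the worked example in Appendix E.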

DEVELOPMENT & TESTING

Repeated testing showed that the power of the robot depended heavily on its battery level. As mentioned above, to enable the robot to move in a specified direction, different speeds were computed for the three wheels. If the battery level was insufficient to power the motors to the required speeds, the movement of the robot was not ideal.

ASSESSMENT

Due to time constraints it was not possible to complete the experimentation and integration of the spin-while-moving feature, so the basic spin-and-move action was used instead. There were also plans to resolve the above-mentioned power and battery issue by powering the motors proportionally to the current battery level; however, due to other more urgent work, this idea was abandoned.

COMMUNICATIONS & CONCURRENCY

GOALS & ACHIEVEMENTS

Light-based NXT-RCX channel – Fully
Common messaging & logging protocol – Fully
Reliable & concurrent message receiving, processing & sending – Partially
Easy to use communications, logging & concurrency framework – Partially

DESIGN

The two light sensors are positioned facing one another with a small gap between and encased in cardboard to eliminate light pollution.

Type | Length | Meaning
On | Short | Bit zero
On | Long | Bit one
Off | Short | Next bit
Off | Long | Next integer

To support the messaging protocol, the light protocol is designed for the exchange of arbitrary binary data. A long pulse is twice the length of a short pulse. A full specification of the light protocol can be found in appendix F.

Given the limitations of the light protocol, the messaging protocol is simple: a sequence of unsigned integers. The first identifies the message type, the second the number of arguments, and each argument consists of exactly one further integer. A full specification of the messaging protocol can be found in appendix G.

The logging framework is built on top of the messaging protocol. Every log message has a source, severity and type, and may have additional arguments. A full description of the logging framework can be found in appendix H.

With the exception of the RCX, every communications endpoint includes input and output queues with asynchronous receivers, processors and senders; the RCX has too little memory to support queues. An asynchronous watchdog, responsible for keeping the channel connected, is present in all endpoints. A full description of the communications framework can be found in appendix I.

The concurrency framework abstracts away the work of threading, starting, stopping, and publisher/subscriber notifications. This framework is used by all threaded components in the system, including the main control agent, and especially by the communications framework. A full description of the concurrency framework can be found in appendix J.
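The queued endpoint structure can be sketched as follows: receiver, processor and sender threads decoupled by input and output queues. This is an illustrative skeleton only; the real framework (appendix I) adds the watchdog, logging and channel-specific details.

    import java.util.concurrent.BlockingQueue;
    import java.util.concurrent.LinkedBlockingQueue;

    public abstract class Endpoint {
        protected final BlockingQueue<int[]> inbox  = new LinkedBlockingQueue<>();
        protected final BlockingQueue<int[]> outbox = new LinkedBlockingQueue<>();

        /** Implemented per channel: blocking read of one message off the wire. */
        protected abstract int[] receiveRaw() throws Exception;
        /** Implemented per channel: blocking write of one message to the wire. */
        protected abstract void sendRaw(int[] message) throws Exception;
        /** Implemented per endpoint: react to one received message. */
        protected abstract void process(int[] message) throws Exception;

        public void start() {
            run("receiver",  () -> inbox.put(receiveRaw()));
            run("processor", () -> process(inbox.take()));
            run("sender",    () -> sendRaw(outbox.take()));
        }

        private void run(String name, ThrowingTask task) {
            Thread t = new Thread(() -> {
                while (!Thread.currentThread().isInterrupted()) {
                    try { task.run(); } catch (Exception e) { return; }
                }
            }, name);
            t.setDaemon(true);
            t.start();
        }

        interface ThrowingTask { void run() throws Exception; }
    }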

DEVELOPMENT & TESTING

The frameworks were developed iteratively, starting simple and adding features as they became required. For example, early implementations of the communications framework did not support input & output queues; these were added when it became clear that delays in message reception, processing and sending were causing problems. Code reviews and multiple simplification, refactoring, and consistency passes were conducted to flush out bugs and make the frameworks easier to use.

RCX code was tested in the LeJOS RCX simulator before being run on the real RCX. Full details of how the RCX code was tested can be found in appendix K. Testing code on the NXT is harder than testing code on the PC, and testing code on the RCX harder still. For this reason, code common to more than one platform was tested first on the PC, then on the NXT and finally on the RCX, whenever possible.

ASSESSMENT

The latency of the NXT-RCX communications channel caused significant difficulties. It takes ~450ms for a “kick” message to arrive at the RCX from the PC, in which time a moving ball and/or robot are unlikely to be in position for an effective kick.

The latency of the light protocol could be reduced significantly by switching from an arbitrary-data approach to a task-specific approach. For example, in the case of NXT RCX messages, neither "kick" nor "reset" needs arguments, so short pulses could be repurposed to mean "kick" and long pulses to mean "reset". The resulting channel would no longer support the common messaging protocol, but it would reduce the latency of kick messages to ~100ms. Alternatively, using motor and sensor multiplexors instead of the RCX would eliminate the latency and be simpler to develop, while still allowing the same motor and touch sensor configuration.

The third aim was not fully achieved because all major components exhibited intermittent and rare "freezing" issues. Although the cause was never tracked down, it is assumed such problems originated in the communications or concurrency frameworks. Unit testing did not identify the cause because the problems never occurred when sending individual messages or when sending multiple messages under more controlled conditions. In hindsight, systematic testing of concurrent operations would have been of value.

The fourth aim was not fully achieved because the communications, logging & concurrency frameworks never reached a point where they could be considered easy to use. Since the frameworks were rarely fully working, making them easy to use was always a lower priority than fixing the bugs. However, this made it difficult for others to get involved in fixing the bugs.

VISION

GOALS & ACHIEVEMENTS

OpenCV frame problem – Fully
Velocity estimation & smoothing – Fully
Reliable & fast communication – Fully
'Robot hides ball' problem – Partially
Barrel distortion correction – None
Automated thresholding – None

DESIGN

Our vision system is built in Python using OpenCV (App. M.1.1). It has a vision processor (to obtain world data) and a communication server (to send that data to the control system) running as separate processes. The system is designed to be configurable through command-line parameters and a settings file (App. M.3). It produces constant command-line location feedback. A simple GUI displays the coloured camera feed and draws circles, lines and points to identify the different elements in the game; it also allows manual threshold adjustment. Custom logging and debug functionality was added, and a test client was built for automated testing. The frame rate is now constantly 23fps (App. M.6) or above and is limited by the camera's capabilities. The system is designed to work both on Windows and on Unix-like operating systems, to allow development on all the platforms we use.

To determine location and orientation, three methods were researched, of which two were implemented and used. The first and last rely on thresholding in the HSV colour model (App. M.1.2). To calculate the needed data, the first isolates two points on the robot: the centroid of the T and a white block placed to the right of the plate (App. M.4.5). The frame rate was ~18fps when using it.

Because of the first method's inefficiency, research was done into using a classifier. The plan was to use edge detection and OpenCV's integrated AdaBoost classifier, training it to recognise the robots and the ball; simple thresholding would then identify which robot is ours. However, this was not achieved, as no way to produce the training set could be found. With Milestone 3 approaching fast, and research suggesting this was a hard and time-consuming task, the idea was abandoned.

Various papers stated that using normalized central moments is computationally very efficient. So, in a desperate and ultimately successful attempt to improve the accuracy and performance of the first method, a better method using image moments was developed and is now used (App. M.4.6, M.5.4, M.5.5). First, it uses the zero and first-order normalized central moments to obtain the coordinates of the centres of the robots (App. M.8.2). Then, using the first and second-order moments, the angle between the principal axis of the "T" and the X-axis of the pitch is obtained. Finally, the third-order moments are used to determine the skewness, so that the orientation of the "T" can be found (App. M.8.1): the skewness shows on which end of the principal axis the head of the "T" lies.

Sadly, this new method initially worked only ~83% of the time for the orientation (App. M.4.6), so a correction was written (App. M.5.5). The final version is 100% accurate and is not affected by noise (App. M.4.6, M.5.5), provided the thresholded object is the biggest contour in its binary image.
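For illustration, the moment arithmetic can be sketched as follows. The project's implementation is in Python with OpenCV (App. M); this Java sketch restates the same zero, first and second-order computations over a thresholded binary mask, with illustrative names. The third-order skew step that resolves the remaining 180° ambiguity is noted in a comment.

    public class MomentsOrientation {
        /** Returns {cx, cy, angle}: centroid and principal-axis angle (radians). */
        public static double[] analyse(boolean[][] mask) {
            // Zero and first-order moments give the centroid.
            double m00 = 0, m10 = 0, m01 = 0;
            for (int y = 0; y < mask.length; y++)
                for (int x = 0; x < mask[y].length; x++)
                    if (mask[y][x]) { m00++; m10 += x; m01 += y; }
            if (m00 == 0) throw new IllegalArgumentException("empty mask");
            double cx = m10 / m00, cy = m01 / m00;

            // Second-order central moments give the principal-axis angle.
            double mu20 = 0, mu02 = 0, mu11 = 0;
            for (int y = 0; y < mask.length; y++)
                for (int x = 0; x < mask[y].length; x++)
                    if (mask[y][x]) {
                        double dx = x - cx, dy = y - cy;
                        mu20 += dx * dx; mu02 += dy * dy; mu11 += dx * dy;
                    }
            double angle = 0.5 * Math.atan2(2 * mu11, mu20 - mu02);
            // The third-order moments (skewness along this axis) would then
            // resolve which end of the axis is the head of the "T".
            return new double[] { cx, cy, angle };
        }
    }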

Accurate and useful prediction data is obtained using a simple calculation of velocities. To achieve this, the coordinates of the objects from the last 20 iterations of the vision system's loop, together with the times at which they were taken, are stored in three caches: one for the ball and one for each of the two robots. Values from the latest and an earlier iteration are then selected, and the velocity of an object is calculated by dividing the distance it travelled over those iterations by the time taken, using V = Δd/Δt (App. M.8.3). Smoothing is added to avoid a flickering effect. It is naive but highly efficient: it replaces the centroid of an object with the weighted average of the previous three centroids, assigning the biggest weights to the most recently obtained centroids.

A script that solved the OpenCV frame problem was written (App. M.8.4) and brightness control was added for camera light calibration (App. M.3). Barrel distortion correction was implemented for Milestone 2 but later removed, because the difference in size and aspect ratio between the old and new (better-quality) frames required calculation of a new calibration matrix, which was not done. Although this correction is not applied, the pixels of the pitch in the image are still distributed closer to the real points than before the frame problem was fixed, as the aspect ratio of the new images is now correct (~2:1).

At first, the communication server was simple and used a string-based, request-driven message protocol. A later improvement gave the opportunity to standardise the protocol throughout the whole system (App. L). The server now handles integer-based messages and one-directional traffic during streaming, and supports message queues to increase reliability. This ensures that important messages are delivered even if the connection is lost and regained. The server is also able to send data to more than one client simultaneously with no added latency (App. M.6) and can operate over the network, not being limited to the local machine.
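A sketch of the caching, velocity and smoothing logic is shown below; only the formulas come from this report, while the structure, the choice of samples compared and the weights are illustrative assumptions.

    import java.util.ArrayDeque;
    import java.util.Deque;

    public class TrackedObject {
        private final Deque<double[]> cache = new ArrayDeque<>(); // {x, y, tSec}

        /** Record one vision iteration, keeping only the last 20 entries. */
        public void record(double x, double y, double timeSec) {
            cache.addLast(new double[] { x, y, timeSec });
            if (cache.size() > 20) cache.removeFirst();
        }

        /** V = Δd/Δt between the oldest cached and the latest observation. */
        public double velocity() {
            if (cache.size() < 2) return 0;
            double[] a = cache.peekFirst(), b = cache.peekLast();
            return Math.hypot(b[0] - a[0], b[1] - a[1]) / (b[2] - a[2]);
        }

        /** Weighted average of the last three centroids, newest weighted most. */
        public double[] smoothedCentroid() {
            double[][] pts = cache.toArray(new double[0][]);
            if (pts.length == 0) return new double[] { 0, 0 };
            double[] weights = { 0.2, 0.3, 0.5 };  // illustrative weights
            int n = Math.min(3, pts.length);
            double x = 0, y = 0, total = 0;
            for (int i = 0; i < n; i++) {
                double w = weights[weights.length - n + i];
                double[] p = pts[pts.length - n + i];
                x += w * p[0]; y += w * p[1]; total += w;
            }
            return new double[] { x / total, y / total };
        }
    }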

DEVELOPMENT & TESTING

The vision system's development started by building its structure (App. M.2) and algorithms separately; the latest versions were merged and maintained by one person. When SVN came into wide use, development continued normally – everybody used the latest repository code to develop, test and add functionality. Once the structure was functioning, a test client was built to allow automated testing of the communication. It helped in discovering and fixing


bugs during the server restructure. Unit testing was mostly done manually in Python's interactive interpreter. Additionally, the system's GUI allowed visual correctness testing by plotting points, lines and vectors representing the state of the variables that held information about objects in the game.

Testing of the vision algorithms was done by trying to simulate all types of conditions in an attempt to break the system or find faults: objects were put in shadows, in normal conditions during day and night, and under lamps to simulate extreme light conditions and to find universal thresholding values.

ASSESSMENT

The vision system is basic: pitch and goal coordinates are hard-coded, and thresholding values are manually adjusted when needed. However, it uses advanced methods for orientation detection, which make it more than fast enough, accurate and very reliable.

AGENT & STRATEGIES

GOALS & ACHIEVEMENTS

Develop an agent suitable for a football game – Fully
Develop basic strategies to be able to compete in a football game – Fully
Implement additional strategies – Partially
Implement curved motion in strategies – None

DESIGN

During the initial phase of the project the team decided to use a complex agent based on the BDI (Beliefs, Desires & Intentions) architecture. This architecture was altered to use utilities instead of logic; however, the overall structure of the BDI control loop was preserved.

The transition from BDI to utilities was necessary because of difficulties caused by committing to a single strategy. In a fast-paced game such as football, the actions issued to the robot have to be constantly re-evaluated, and the optimal plan often changes rapidly. Commitment to a plan is therefore inefficient: a movement by the opponent might create a better option while the current plan remains achievable. In such a situation the old agent continued executing the old, still-achievable plan instead of selecting the better option. By contrast, the new agent constantly goes through all strategies, calculates their utilities, and picks the most suitable one, as shown in the pseudo-code below.

for each frame of vision data:
    for each strategy:
        calculate current utility
        if max utility < current utility:
            max utility = current utility
            best strategy = current strategy
        end if
    end for
    execute best strategy
end for
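In Java, the loop amounts to the following sketch. The Strategy interface and the WorldState and Action types are hypothetical stand-ins; the real agent additionally caches utility factors (appendix N) to keep the per-frame cost low.

    import java.util.List;

    class WorldState { /* positions, velocities, etc. */ }
    class Action { /* a single robot command */ }

    interface Strategy {
        double utility(WorldState world);     // how suitable this strategy is now
        Action bestAction(WorldState world);  // a single action, not a full plan
    }

    class Agent {
        private final List<Strategy> strategies;

        Agent(List<Strategy> strategies) { this.strategies = strategies; }

        /** Called once per frame of vision data. */
        Action decide(WorldState world) {
            Strategy best = strategies.get(0);
            double maxUtility = Double.NEGATIVE_INFINITY;
            for (Strategy s : strategies) {
                double u = s.utility(world);
                if (u > maxUtility) {   // keep the strategy with highest utility
                    maxUtility = u;
                    best = s;
                }
            }
            return best.bestAction(world);  // only this single action is executed
        }
    }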

DEVELOPMENT & TESTING

The new agent allowed one additional modification: discarding the idea of planning ahead and instead issuing a single action. The old agent executed a plan of consecutive actions created by a strategy for as long as the strategy remained feasible. The new agent re-evaluates all strategies on every iteration and returns the best one, which is either the same as the one selected previously or a different, more suitable one. In either case it provides a more accurate plan, which overwrites the old one; consequently, only the first action in each plan was ever executed, and the rest of the plan was unused while still adding to the amount of computation required. For that reason, each strategy now returns a single action rather than a plan. A strategy can still consist of a number of actions, but it returns the one most suitable for the situation. If a strategy is continuously selected, it eventually achieves its aim; if at some point another strategy is selected, then the first one is no longer optimal and there is no point in achieving its aim.

The decision to change the agent was taken after its performance was compared to that of the old agent. Removing the need for a plan and instead sending a single command proved beneficial, but there were concerns about increasing the amount of computation required on each iteration: since each utility depends on a number of factors, constantly re-calculating the utilities of all strategies might reduce performance. However, the team managed to find a solution that makes this reduction in performance insignificant, which is discussed in appendix N.

Throughout the project many good strategies were proposed and considered. However, the team spent most of its efforts perfecting and testing those that were essential, instead of adding new ones without improving those already written. The most essential strategies include handling penalties, approaching the ball, avoiding objects, shooting to score a goal, and blocking the opponent. Most of the strategies were initially written in a simplified manner, not taking into account collisions or walls. A great amount of time


was spent on developing, improving and implementing path-finding techniques that provide adequate solutions in extreme situations (e.g. when the ball is near a wall or close to the opponent). A full list and description of the strategies used, and an explanation of the path-finding techniques, can be found in Appendices O and P.

ASSESSMENT

In the time and work spent on the agent and strategies, priority was given to getting the basic behaviours working and ensuring accurate behaviour in extreme cases. As a result, the robot competes adequately in a football game: it demonstrates strong defensive and offensive skills, even though the delay in the kick command prevents it from scoring more goals. Handling of extreme cases, such as the ball being near a wall or in a corner, is not perfect, but the robot manages to act adequately in most of them. This part of the project allows for many future improvements: perfecting the basic behaviours, implementing complex movement techniques into the current strategies, and developing new strategies. Decelerating on approach to the ball, accelerating while dribbling, and even rotating to change the direction of an incoming ball could also improve performance. Rotating on the move was also discussed; it could be useful when dribbling with the ball or avoiding opponents. Finally, developing strategies that are more effective against certain opponents would allow countermeasures to be taken against an opponent's strengths, and its weaknesses to be exploited more effectively.

COLLISION AVOIDANCE & HANDLING

GOALS & ACHIEVEMENTS

Add sensors to handle collision – Fully
Use vision to avoid collisions – Partially
Investigate and integrate ultrasonic sensor – Partially
Collision behaviour which moves out of danger – Fully
Maximise collision detection surface area – Fully

DESIGN

The team aimed to maximise the total collision surface area (Table ). In the first design cycle, we added two whisker sensors, which covered a minimal amount of surface area. By the last cycle, coverage had improved to four fronts that detected collisions using curtain-style sensors (Figure 28).

Once the collision sensors were mounted, the robot had to react to their input. Initially, when a sensor was activated, the robot reacted by moving in the opposite direction. This reactive behaviour proved inadequate, so the design was augmented with smart route planning for proactive behaviour. Ultimately, we balanced the system to combine reactivity with smart route planning.

DEVELOPMENT & TESTING

In the first development cycle, we had two whisker sensors and a reactive behaviour. Testing revealed that the whiskers were insensitive to collisions and that the behaviour was unsophisticated – the robot simply backed away. So we changed the collision sensor surfaces to curtains, and the behaviour to use vision to avoid all objects except the ball. More testing revealed that the robot would not approach the ball if the other robot was in possession. The final iteration therefore removed the proactive collision avoidance, at the cost that our robot would collide when the opponent possessed the ball.

ASSESSMENT

The system worked for simple collisions. However, within the agent architecture, avoiding obstacles and obtaining the ball without colliding were mutually exclusive. The best balance we could find was achieved.

SIMULATOR

GOALS & ACHIEVEMENTS

Ability to test strategies in as realistic an environment as possible – Partially
Simple approach using off-the-shelf components where possible – Fully
Ability to "script" test scenarios that could be easily re-executed – None
Ability to use mouse to position and apply momentum to objects – None

DESIGN

The simulator is implemented in Java, enabling reuse of the communications, concurrency and logging frameworks. It uses basic AWT graphics. Two rigid-body physics simulators were evaluated: JBox2D and Phys2D. Phys2D was selected on the basis of our own experimentation and an online review showing that Phys2D is easier to develop with and well capable of handling the small number of objects we required.

FIGURE : SIMULATOR SCREENSHOT

To achieve the goal of testing the strategies as realistically as possible, we aimed to run as much "real" code inside the simulator as possible. Two approaches were considered:

1. Run the "players" in separate processes with alternative communication channel end points.
2. Run everything in the same process by using alternative implementations of Java interfaces.

The first approach was taken because it utilised a well-established interface & protocol and avoided accidentally leaking information from one domain into the other (e.g. via static state and singletons).

DEVELOPMENT & TESTING

During the early stages, objects regularly passed right through obstacles. A difference between, and misunderstanding of, the Swing and Phys2D coordinate systems was eventually found to be causing this problem: the position of a Swing object is the top-left of its bounding box, while Phys2D uses the centre of mass. Roughly 20% of the development time was spent tuning Phys2D parameters (friction, damping, etc.).
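The fix amounts to an offset at drawing time, as in the sketch below (with illustrative names): the physics position is the centre of mass, so rendering must shift by half the body's size to obtain Swing's top-left corner.

    import java.awt.Graphics;

    public class SimRenderer {
        /**
         * Draws a rectangular body whose physics position (cx, cy) is its
         * centre of mass. Swing/AWT expects the top-left corner of the
         * bounding box, so the draw position is offset by half the size.
         */
        static void drawBody(Graphics g, float cx, float cy, float w, float h) {
            g.fillRect(Math.round(cx - w / 2), Math.round(cy - h / 2),
                       Math.round(w), Math.round(h));
        }
    }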

ASSESSMENT

Although the simulator was identified as a necessary component by the team in early meetings, it was routinely deprioritised until after milestone 3 and was not in a usable state until after milestone 4. This was too late for the simulator to fully achieve its primary purpose of supporting strategy testing. Some value was obtained from the simulator by those testing strategies in the final stages of the project. It also had the side effect of helping to test the communications, concurrency and logging frameworks more thoroughly.

CONCLUSION

The Group 2 robot boasts an original holonomic 3-wheel design and robust, "studless" construction. A clear and consistent solution architecture and messaging protocol allowed the work to be spread effectively across the team. The system performed well in all milestone demonstrations and in the final-day tournament.

In hindsight, using motor and sensor multiplexors instead of the RCX might have yielded a more reliable and easier-to-develop solution. There is also much potential for improving the robot's movement, in particular spinning while moving.

Hat-Trick have worked through many technical and team problems to produce a competitive Automated Lego Footballer. The system has potential for significant improvements which, it is hoped, future SDP groups will take on.


APPENDIX A: REFERENCES

1. OpenCV Wiki. [Online] http://opencv.willowgarage.com/wiki/
2. Dynamically-Stable Mobile Robots in Human Environments. Microdynamic Systems Laboratory. [Online] http://www.msl.ri.cmu.edu/projects/ballbot/
3. attractivejeremy. Holonomic Robot Demonstration. YouTube. [Online] http://www.youtube.com/watch?v=B6oBc8J-_bo
4. Boosting. OpenCV. [Online] http://opencv.willowgarage.com/documentation/cpp/boosting.html
5. Richard J. Prokop and Anthony P. Reeves. A Survey of Moment-Based Techniques for Unoccluded Object Representation and Recognition.
6. Phys2D. [Online] http://www.cokeandcode.com/phys2d/
7. JBox2D. [Online] http://www.jbox2d.org/
8. Ciardhubh. A comparison and benchmark of Phys2D and JBox2D. /var/www/it. [Online] http://ciardhubh.de/node/15
9. Qiang Zhou, Limin Ma, David Chelberg and David Parrott. Robust Color Choice for Small-Size League RoboCup Competition. Systemics, Cybernetics and Informatics.
10. Utkarsh. Color spaces. [Online] http://www.aishack.in/2010/01/color-spaces-2/

APPENDIX B: SOLUTION ARCHITECTURE DIAGRAM

APPENDIX C: ROBOT DESIGN

# | Control method | Max motors | Locomotion | Kicker | Holonomic?
1 | NXT | 3 NXT | 2 NXT + castors | 1 NXT | No
2 | NXT + RCX | 3 NXT + 3 RCX | 3 NXT | 2 RCX | Yes
3 | NXT + RCX | 3 NXT + 3 RCX | 2 NXT + 2 RCX | 1 NXT | Yes (limited)
4 | NXT + RCX motor multiplexor | 3 NXT + 4 RCX | 3 NXT | 2 RCX | Yes
5 | NXT + RCX motor multiplexor | 3 NXT + 4 RCX | 2 NXT + 2 RCX | 1 NXT | Yes (limited)
6 | NXT + NXT motor multiplexor | 5 NXT | 4 NXT | 1 NXT | Yes

TABLE : HIGH-LEVEL ROBOT DESIGNS CONSIDERED

FIGURE : DISTANCES D1=D2=D3=D4 (SYMMETRY), FINAL FRAME

DESIGN OF THE 4-WHEEL HOLONOMIC ROBOT

FIGURE : ATTEMPTS TO FIT RCX MOTOR-WHEEL PAIRS (PLEASE SEE COMMENTS IN SUBSECTION 1.1)

FIGURE : GEAR RATIO

FIGURE : MAIN AND SECONDARY KICKERS

FIGURE : 3-WHEEL DESIGN'S MAIN ARRANGEMENT & ANGLES

FIGURE : CONTINUOUS ACTION KICKER (PLEASE SEE COMMENTS IN SUBSECTION 1.1)

FIGURE : ATTEMPTS TO OPTIMISE THE CONTINUOUS KICKER (PLEASE SEE COMMENTS IN SUBSECTION 1.1)

FIGURE : START-STOP ACTION KICKER (PLEASE SEE COMMENTS)

FIGURE : TRIANGULAR FRAME

FIGURE : START-STOP ACTION KICKER – DIFFERENT

FIGURE : FIRST VERSION OF A FULL 3-WHEEL DESIGN (PLEASE SEE COMMENTS IN SUBSECTION 1.1)

1.1. COMMENTS

FIGURE 2

We attempted many times to mount the RCX motors in front of the NXT motors. We varied the distance D and height H to try to attach the motor firmly to the frame at positions P1, P2 and P3. Most of the time the mount was impossible because of the gears. Moreover, when we did succeed in attaching the RCX motors to the frame, they could not fit together with the NXT ones, so only one pair could stay mounted on the frame. None of these attempts succeeded because, no matter how the two pairs were attached, at least two wheels always broke the size constraints. We figured out that, to save as much space as possible and fit within the constraints, the RCX motors needed to sit inside the NXT pair. However, bringing the wheels closer left little space for the actual frame that had to hold them. So we built the frame around the motors, attached the outer frame to it, and reinforced it in the middle (below and above the motors). To equalise the distances between the wheels and the frame's centre point, the wheels of the NXT pair had to be attached directly to the rotors without anything in between. In the end, the motors became an actual part of the frame.

FIGURE 6 & FIGURE 7

The kicker was built by Daniel. He varied the length of the lever from design A to design B (Figure 7) to try to improve it, but in the end it was weak and could only score from the centre of the pitch.

FIGURE 8 & FIGURE 9

To build this kicker, we aimed to find the optimal lengths for the levers to achieve maximum power. We tried a lot of combinations by varying distances D1, D2 and D3 (Figure 9) and physically testing all of them on the pitch. Figure 8 shows the final kicker design. It had to use RCX motors, because the NXT ones were used for movement. There was also not enough space to fit an NXT motor at the top, because of the small surface of the platform, the height limits and the size of the NXT and RCX blocks. Because the RCX motors are weaker, we used two.

APPENDIX D: CURVED MOVEMENT

The wheel-powered base consists of three wheels, each positioned at an angle αi relative to the local frame [xl, yl]. The centre of this frame coincides with the centre of gravity of the base, and wheel 1 lies on the local axis xl, in other words α1 = 0. The pose of the base with respect to the global frame (X, Y) is given by the global coordinates [x, y, ψ]. The relation between the global velocity of the platform (ẋ, ẏ, ψ̇) and the translational velocity Vi of wheel hub i can be obtained using the inverse kinematic equation of each wheel hub. R refers to the length from the centre of gravity of the robot to the centre of each wheel. If r is the radius of each wheel, then the translational velocity of the hub can be rewritten as an angular velocity φ̇i of the wheel: Vi = r·φ̇i.

As the robot is highly symmetrical, the inverse Jacobian matrix is easily calculated. Given the linear velocities ẋ, ẏ and the angular velocity ψ̇ in the global reference frame, the linear velocity of each wheel can be obtained from the inverse kinematics:

    Vi = −ẋ·sin(ψ + αi) + ẏ·cos(ψ + αi) + R·ψ̇

In the diagram, if b is the vector joining the centre of the robot and the ball, then φ is the angle of the ball with respect to the X axis; if k is the vector joining the kicker and the centre, then θ is the angle of the local x axis with respect to the X axis. The angle of rotation ψ is the angle between the line joining the centre of the robot to the kicker and the line joining the centre to the ball, which can be calculated as ψ = 180° − θ + φ.

For the robot in the position below, r = 3.2 cm, R = 6.5 cm and θ = 0°, as the local axes coincide with the global axes. The ball is positioned at an angle of 45° from the X axis and the kicker is on the x axis. There are three wheels W1, W2 and W3, equidistant R from O. The task is for the robot to move towards the ball so that the kicker hits it; hence the centre of the robot has to undergo both a translational and an angular velocity, turning through 5π/4 radians to bring the kicker to face the ball. At the angular velocity used, this turn takes 3.60 seconds.

APPENDIX E: MOVEMENT IN ANY DIRECTION

The wheel-powered base consists of three wheels, positioned as shown in the figure relative to the frame [X, Y]. The centre of this frame coincides with the centre of gravity of the base and wheel 3 is located on the X axis. In the diagram the ball is positioned at an angle of 45° from the X axis. The three wheels W1, W2 and W3 are equidistant R from O, the centre of the frame. The task is for the robot to move towards the ball, so the centre of the robot has to undergo a translational velocity at an angle of 45° from the X axis.

As the wheels are spaced 120° apart, the angles made by W1, W3 and W2 with the X axis are 60°, 180° and 300° respectively.

The speed of each wheel is calculated as the dot product of the velocity vector with that wheel's drive direction (perpendicular to its position vector from O). For movement at 45° this gives speeds proportional to:

    W2: 0.965925826, i.e. 96.6
    W1: −0.258819045, i.e. −25.9
    W3: −0.707106781, i.e. −70.7

APPENDIX F: LIGHT PROTOCOL

The NXT and, to a lesser extent, the RCX are unable to reliably distinguish between the four light states:

o No lights on
o Only my light on
o Only other light on
o Both lights on

Being unable to distinguish between all four light states means that the connection can only be half-duplex – it is not possible for both RCX and NXT to be sending messages at the same time.

The range of the light sensor readings is not obvious. The NXT appears to exhibit ordered values – the higher the value, the lower the detected light intensity. The RCX on the other hand exhibits unordered values: the value is lowest when the other light is on and highest when its own light is on with “none” and “both” somewhere in between!

State | NXT | RCX
None | 843 | 535
Self | 519 | 765
Other | 250 | 402
Both | 255 | 497

TABLE : EXAMPLE LIGHT SENSOR READINGS

The light range used to detect the RCX light by the NXT is 284±50 (configured in melmac.masterblock.comms.RcxLightReceiver).

The light range used to detect the NXT light by the RCX is 398±50 (configured in melmac.slaveblock.comms.NxtLightReceiver).

Battery power levels can affect the readings. Environmental light pollution can have a very large effect; this is minimised by the cardboard shielding but is not eliminated entirely.

Under some circumstances the light detection ranges may need to be altered.

The RCX is “active”, that is, after start-up it assumes the NXT is watching and so it is free to send messages immediately.

The NXT is “passive” in that it won’t send any messages until after it has first seen a message from the RCX.

The active/passive distinction is needed because with a half-duplex channel, implementing a handshaking protocol was found to be too time consuming.

A “receiving” flag is set when a light is detected preventing messages being sent at the same time. However, it’s still possible for both lights to be on at the same time accidentally. When this happens, either the “both lights” state is detected or, more likely, a message is corrupted. In either case, an error is logged and the channel disconnected (they return to their initial start-up active/passive states).

Short pulses of light have a duration of 50ms; long pulses have a duration of 100ms. Shorter pulse durations were attempted, but 50ms was the shortest duration at which the connection remained reliable.

Pulse duration is determined by the mid-point between short and long (i.e. <75ms = short and >=75ms = long).

The duration of a light-on pulse indicates a binary value: short = zero, long = one.

The duration of the gap between binary values has two different meanings: short = next bit, long = next value.

The receiver is triggered on each transition “on to off” or “off to on” – there is no timer firing after a set period of time to check the light state.

The duration of an “event” (light-on or light-off pulse) is determined by the time between the current event and the last event.

All values transmitted are assumed to be unsigned integers encoded into no more than 31 bits (so they can be stored in a Java signed integer).

Only the significant bits are transmitted. The end of the message is detected implicitly – when all argument values have been received. If the message has no arguments, then the end of the message occurs after receiving the zero for the number-of-arguments value.

Example: to send the message { message type = 3, argument 1 = 1, argument 2 = 0 } would require the following pulse sequence:

1. On long (bit 1)
2. Off short (next bit)
3. On long (bit 1)
4. Off long (next value, message type = 3)
5. On short (bit 0)
6. Off short (next bit)
7. On long (bit 1)
8. Off long (next value, number args. = 2)
9. On long (bit 1)
10. Off long (next value, arg 1 = 1)
11. On short (bit 0)
12. Off (end of message since 2 args received, arg 2 = 0)

With 7 long events and 4 short events, the example message above would require ~900ms to transmit, which demonstrates the very high latency and low bandwidth of this channel.
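For illustration, the encoding side of the protocol can be sketched as follows; the pulse durations and the least-significant-bit-first order are taken from this appendix, while the class and method names are illustrative. Applied to the example message above, it reproduces the same 12-event pulse sequence.

    import java.util.ArrayList;
    import java.util.List;

    public class LightPulseEncoder {
        static final int SHORT_MS = 50;   // short pulse: bit 0 / next bit
        static final int LONG_MS  = 100;  // long pulse: bit 1 / next value

        /** Encodes one unsigned value as alternating on/off durations (ms). */
        static void encodeValue(int value, boolean lastValue, List<Integer> out) {
            // Only the significant bits, least-significant bit first,
            // matching the worked example above (e.g. 2 -> bits 0, 1).
            do {
                int bit = value & 1;
                value >>>= 1;
                out.add(bit == 0 ? SHORT_MS : LONG_MS);   // light on: the bit
                if (value != 0) out.add(SHORT_MS);        // off: next bit
                else if (!lastValue) out.add(LONG_MS);    // off: next value
                // else: end of message; the light simply stays off
            } while (value != 0);
        }

        /** Message = type, number of args, then each arg, per the protocol. */
        static List<Integer> encodeMessage(int type, int... args) {
            List<Integer> pulses = new ArrayList<>();
            encodeValue(type, false, pulses);
            encodeValue(args.length, args.length == 0, pulses);
            for (int i = 0; i < args.length; i++)
                encodeValue(args[i], i == args.length - 1, pulses);
            return pulses;
        }
    }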

APPENDIX G: MESSAGING PROTOCOL

All communications channels utilise this protocol including the socket connection between Java and Python, the Bluetooth connection between Java and the NXT and the light-based channel between the NXT and the RCX.

All messages are comprised of two or more unsigned integers.

The first integer identifies the message type. The second integer identifies the number of arguments contained within the message. If there are no arguments, this value will be zero and will also be the final value in the message.

Each argument comprises a single integer value, and arguments are sent in sequence. If the second value in the message is "2" then exactly two argument values MUST be sent.

Sending more or fewer argument values than that indicated in the second message value will result in a corrupted message.

Message type values have different meanings on different channels. For example, on the Java NXT channel message type 2 means “spin” but on the Java Python channel message type 2 means “start frames”.

All message types can be found in melmac.core.comms.MessageType, which is a pseudo-enum. A real enum could not be used because the RCX does not support them.

The MessageType class also contains constants identifying the total number of message types for each channel which are used to create some fixed size arrays in code used on the RCX.
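The pseudo-enum pattern amounts to plain int constants in a final class, as in the sketch below; only spin = 2 (Java NXT channel) and acknowledgement = 0 come from this report, and the remaining names and values are illustrative.

    public final class MessageTypeSketch {
        private MessageTypeSketch() {}  // constants only, no instances

        public static final int ACKNOWLEDGEMENT = 0;
        public static final int MOVE = 1;            // illustrative value
        public static final int SPIN = 2;

        /** Total count, used to size fixed arrays in RCX code. */
        public static final int JAVA_NXT_COUNT = 3;  // illustrative value
    }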

In all but one case, message type 0 indicates “acknowledgement”. However, on the NXT RCX channel, this has been swapped with “kick” to shave 50ms off the time needed to send a “kick” message to the RCX.

Acknowledgement message types are special. Messages of this type are handled differently and are used to indicate a previous command has been completed or successfully received.

Some messages are defined as synchronous while others are defined as asynchronous. Synchronous messages MUST have an acknowledgement sent in reply while asynchronous messages MUST NOT have an acknowledgement sent in reply.

Channel | Message | Mode
Java → NXT | Move | Async
Java → NXT | Spin | Sync
Java → NXT | Stop | Sync
Java → NXT | Kick | Sync
Java → NXT | Curved move | Sync
Java → NXT | Reset | Sync
NXT → Java | Collision | Async
NXT → Java | Log | Async
Java → Python | Pitch request | Async
Java → Python | Start frames | Async
Java → Python | Stop frames | Sync
Java → Python | Reset | Sync
Python → Java | Pitch info | Async
Python → Java | Frame info | Async
Python → Java | Log | Async
NXT → RCX | Kick | Sync
NXT → RCX | Reset | Sync
RCX → NXT | Collision | Async
RCX → NXT | Log | Async

TABLE : MESSAGE TYPES

The “move” message actually means “start moving” which is why it is asynchronous – no acknowledgement is sent when the move is complete because there is no end condition sent to the NXT. To stop moving a “stop” message must be sent by the control agent at some point in the future.

"Spin" & "kick" are synchronous because they both involve turning motors on for a fixed period of time and then turning them off automatically. All of this work is done on the NXT or RCX – there is no need to send a "stop" message to end a spin or kick command. As a result, neither of these commands can be interrupted.

Only some messages use arguments.

Message | Arguments
Move | Angle (degrees); Power (0 - ~160)
Spin | Angle (degrees); Power (0 - ~160)
Curved move | Direction angle (degrees); X-distance (pixels); Y-distance (pixels); Spin angle (degrees); Power (0 - ~160)
Collision | Sensor ID (0 - 4)
Log | Source; Severity; Log message ID; Log message arguments (see below for info on these)
Pitch info | Width (pixels); Height (pixels); Left goal top (pixels); Left goal bottom (pixels); Right goal top (pixels); Right goal bottom (pixels)
Frame info | Ball X (pixels); Ball Y (pixels); Blue X (pixels); Blue Y (pixels); Yellow X (pixels); Yellow Y (pixels); Blue direction X (pixels); Blue direction Y (pixels); Yellow direction X (pixels); Yellow direction Y (pixels); Ball values trusted; Blue values trusted; Yellow values trusted; Ball velocity X (pixels); Ball velocity Y (pixels); Blue velocity X (pixels); Blue velocity Y (pixels); Yellow velocity X (pixels); Yellow velocity Y (pixels)

TABLE : MESSAGE ARGUMENTS

Acknowledgement messages do not include any arguments. They are processed in strict FIFO order so it is always possible to know which message an acknowledgement refers to.
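Because acknowledgements carry no arguments, the matching relies purely on ordering. A minimal sketch of the idea (hypothetical names; the real logic lives in the message sender classes described in Appendix I):

import java.util.ArrayDeque;
import java.util.Queue;

// Hypothetical sketch: FIFO matching of argument-less acknowledgements
// to previously sent synchronous messages.
final class AckMatcher {
    private final Queue<Integer> pendingSyncTypes = new ArrayDeque<Integer>();

    // Record every synchronous message as it is sent.
    void onSyncMessageSent(int messageType) {
        pendingSyncTypes.add(messageType);
    }

    // An incoming acknowledgement always refers to the oldest
    // unacknowledged synchronous message.
    int onAckReceived() {
        return pendingSyncTypes.remove();
    }
}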


APPENDIX H: LOGGING FRAMEWORK

A custom logging framework was needed because the standard logging mechanisms available in Java are not available on the NXT and RCX.

We could not use an off-the-shelf logging framework because we had to stick to integers – no textual logging was possible due to limitations of the RCX.

All code relating to the logging framework can be found in melmac.core.logging.

All log messages need to be communicable over the common messaging protocol. For that reason only integers can be used.

Every log message consists of a source, severity, log message type, and arbitrary list of arguments. Each argument consists of a single integer. Note that “log message type” is independent of the “message type” used in the communications framework.

Source          Severity
Rcx = 0         Debug = 0
Nxt = 1         Info = 1
Java = 2        Warning = 2
Python = 3      Severe = 3
UI = 4          Exception = 4
Simulator = 5

The pseudo-enums can be found in:
o Source
o Severity
o LogMessage

Classes for providing textual descriptions for the pseudo-enums (only on the NXT and PC) can be found in:
o SourceExt
o SeverityExt
o LogMessageExt
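The pseudo-enum pattern itself is straightforward; a sketch using the Severity values from the table above (the class shape is illustrative, not copied from the source):

// Pseudo-enum: plain int constants in a final, uninstantiable class,
// used because the RCX VM does not support real Java enums.
public final class Severity {
    public static final int DEBUG = 0;
    public static final int INFO = 1;
    public static final int WARNING = 2;
    public static final int SEVERE = 3;
    public static final int EXCEPTION = 4;

    private Severity() { } // no instances
}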

The communications framework is an extensive user of the logging framework and there are many log message types common to each endpoint. For this reason, "bundles" of log message types are represented in LogMessage and their textual representations are provided by LogMessageExt.

Logger is an interface implemented by anything capable of accepting log requests. Some loggers are "local" while others are "remote". Remote loggers use the communications framework to "broadcast" log messages, but they will always also provide the log message to a local fall-back logger.

LoggerBase is the canonical base implementation of Logger. It allows subclasses to override a single log method instead of implementing all four by "hiding" the source value.

VoidLogger was sometimes useful in testing – it simply discards all log requests.

ConsoleLogger is used in the various PC components to send log messages to standard out.

CommunicationsLogger is a base class providing services for all “remote” loggers. It will send a log message over a given communications channel if possible, catching and ignoring any exceptions, and then send to a local fall-back logger. All subclasses can be found distributed around the solution since they are device/communications channel specific.

The ScreenLogger on the NXT is capable of displaying limited textual information but the LCD is still limited to 8 lines of 16 characters. A basic scrolling logging mechanism is implemented.

The ScreenLogger on the RCX is limited to displaying only numbers. Only the log message type is displayed. There is no delay after showing a number so one log message may be immediately overwritten by the next. Any delay (i.e. Thread.sleep) affects operations elsewhere.

An AsyncScreenLogger with a buffer for only a single message is available on the RCX; it was written when it looked like writing to the RCX LCD from different threads was causing exceptions. TriggeredScreenLogger was created when it looked like writing to the RCX LCD from any thread other than the main thread was causing exceptions. Neither class is used.

Some design decisions were constrained by the limitations of the LeJOS version of Java available on the NXT and, in particular, on the RCX. For example, enums could not be used in code that was to run on the RCX since the version of Java available on the RCX does not support enums.

APPENDIX I: COMMUNICATIONS FRAMEWORK

The communications framework is composed of 12 types of components:

o Communications base services
o Message senders
o Message receivers
o Message handlers
o Message processors
o Connection watchdog
o Light communications
o Socket communications
o Bluetooth communications
o Data input stream communications
o Various common utilities
o Simulator communications

The majority of the code relating to the communications framework can be found in melmac.core.comms. However, some component specific elements can be found in the projects dedicated to those components. For example, RcxCommunications can be found in melmac.masterblock.comms.

The communications base services are responsible for abstracting the task of sending messages, coordinating the reception and processing of messages, and for maintaining the channel connection. They are comprised of the following interfaces and classes:

o Communications: an interface implemented by all providers of communication services.

o CommunicationsBase: the canonical base implementation of the Communications interface. Encapsulates a message sender, a message processor and a watchdog. Maintains flags indicating the current state with respect to connected, sending, receiving, connecting and disconnecting.

Message senders are responsible for sending messages over any communications channel.

o MessageSender: all message senders implement this interface.

o AsyncMessageSender: uses a fixed size buffer to send messages on a thread separate from the component asking for the message to be sent. Even so, requests to send a synchronous message are still blocked until the acknowledgement is received. The queue is forcefully cleared if it overflows. Message states are maintained according to the message state-transition diagram. This is used everywhere except on the RCX.

o SyncMessageSender: sends messages immediately and on the same thread as the caller. Used only on the RCX.

Message receivers are responsible for receiving messages, but this cannot be done in a generic way. Details of each message receiver are therefore given in the section covering its channel type.

Message handlers are registered with a communications service to be invoked when a message of a particular type is received. There is typically a separate message handler for each type of message. Message handlers typically contain very little code – they just extract the arguments and invoke the real code capable of dealing with the event. As an example of this pattern, take a look at MoveMessageHandler in melmac.masterblock.comms.handlers.
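In outline, the pattern looks like this (hypothetical signatures, sketched from the description above; see the real handlers in melmac.masterblock.comms.handlers):

// Hypothetical shape of the handler pattern: extract arguments, delegate.
interface MessageHandler {
    void handle(int[] args);
}

interface Mover {
    void startMoving(int angleDegrees, int power);
}

final class MoveMessageHandler implements MessageHandler {
    private final Mover mover; // the component that actually drives the wheels

    MoveMessageHandler(Mover mover) { this.mover = mover; }

    public void handle(int[] args) {
        // Per the "move" argument table: angle in degrees, then power.
        int angleDegrees = args[0];
        int power = args[1];
        mover.startMoving(angleDegrees, power);
    }
}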

Message processors forward messages to the appropriate message handler based on the message type.

o MessageProcessor: all message processors implement this interface.


o AsyncMessageProcessor: uses a fixed size buffer to queue incoming messages before processing. Processing is then conducted in a FIFO fashion on a separate thread. Message states are maintained according to the transition diagram in the figure below. This is used everywhere except on the RCX.

FIGURE: ASYNC. MESSAGE PROCESSING STATES

o SyncMessageProcessor: Processes messages immediately and on the same thread as the caller. Used only on the RCX.

There is a possibility for each communication channel to fail. A Watchdog is used at each endpoint to monitor the channel and, if it fails, to attempt a reconnect. The watchdogs are also responsible for the initial connection attempt.
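A minimal sketch of the watchdog loop (hypothetical: it assumes the monitored channel exposes isConnected() and connect(), which is an assumption rather than the documented interface):

// Hypothetical watchdog: performs the initial connection attempt and
// reconnects whenever the channel drops.
interface MonitorableChannel {
    boolean isConnected();
    void connect() throws Exception;
}

final class Watchdog implements Runnable {
    private final MonitorableChannel channel;
    private volatile boolean running = true;

    Watchdog(MonitorableChannel channel) { this.channel = channel; }

    public void run() {
        while (running) {
            if (!channel.isConnected()) {
                try {
                    channel.connect(); // also covers the initial connection
                } catch (Exception e) {
                    // ignore and retry on the next iteration
                }
            }
            try {
                Thread.sleep(500); // polling interval is an assumption
            } catch (InterruptedException e) {
                running = false;
            }
        }
    }

    void stop() { running = false; }
}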

The most complex part of the communications framework is the light communications.

o LightCommunications: implements the connection protocol (active/passive) and the ability to send messages by turning the light on and off.

o ControllingLightCommunications: used on the NXT to “control” the RCX. The NXT sends “reset” messages to the RCX after a connection and before a planned disconnection.

o LightReceiver: implements the complex light reception protocol. Note that this class consists of little more than a single large method. Although it would be possible to break the method up, lots of small methods increase the code size overhead on the RCX. When literally every byte counts, it's sometimes necessary to use monolithic methods like this one.

o RcxCommunications: provides access to a RcxLightReceiver and concrete implementations of the turnLightOn and turnLightOff methods for the RCX light sensor.

o RcxLightReceiver: provides the light-on boundary values for the RCX light sensor.

o NxtCommunications: provides access to a NxtLightReceiver and concrete implementations of the turnLightOn and turnLightOff methods for the NXT light sensor.

o NxtLightReceiver: provides the light-on boundary values for the NXT light sensor.

A set of classes provide a means for creating communication channels over basic TCP sockets.

o ClientSocketCommunications: used to connect to a socket server such as the Python server or to the simulator.

o ServerSocketCommunications: used in the simulator to offer virtual vision and NXT connection endpoints.

o PythonCommunications: a concrete implementation of ClientSocketCommunications which sends a request for pitch information as soon as a connection is established.

Bluetooth communications are achieved via the following concrete communications classes:

o JavaCommunications: used on the NXT to offer a Bluetooth server.

o NxtCommunications: used in the Java application to act as a Bluetooth client.

Data input stream communications are layered beneath either socket communications or Bluetooth communications.

o DataInputStreamCommunications: provides access to a DataInputStreamReceiver and manages the output stream component.

o ControllingDataInputStreamCommunications: used in the Java application to “control” the NXT and Python service. The Java application sends “reset” messages to the NXT and Python service after a connection and before a planned disconnection.

o DataInputStreamReceiver: manages the input stream component of the connection. Uses the readInt() method which blocks. Sadly this means that there may be undefined behaviour when the connection is asked to close while it is waiting for the next int.

o PcDataInputStreamReceiver: a thin wrapper class that disconnects when any non-timeout exception occurs.

o NxtDataInputStreamReceiver: a thin wrapper class that disconnects when any exception occurs.

Various common utilities:

o InMessageStatus: an enum for the message states used in AsyncMessageProcessor.

o MessageType: the list of all message types exchanged in all communication channels.

o OutMessageType: an enum for the message states used in AsyncMessageSender.

The mock connection endpoints are implemented in the simulator by the following classes:

o MockNxtCommunications: used by the Java application to connect to the mock NXT endpoint inside the simulator. A similar class is not required for the vision server connection since both are socket based.

o MockNxtJavaCommunications: provides a pretend NXT connection endpoint inside the simulator for the Java application to connect to. The simulator will create two instances of this, one for each player.

o MockPythonJavaCommunications: provides a pretend Python connection endpoint inside the simulator for the Java application to connect to. The simulator will create two instances of this, one for each player.

Some design decisions were constrained by the limitations of the LeJOS version of Java available on the NXT and, in particular, on the RCX. For example, enums could not be used in code that was to run on the RCX since the version of Java available on the RCX does not support enums.

APPENDIX J: CONCURRENCY FRAMEWORK

All components that need to create additional threads use this framework to provide concurrency services.

All code relating to the concurrency framework can be found in melmac.core.threading.

To aid debugging, it is useful to name all threads. Unfortunately the RCX does not support named threads, and the method for doing this on the NXT differs from the standard Java library being used on the PC. For that reason, it was necessary to create the following interfaces and classes:

o Runnable: identical to the standard java.lang.Runnable interface but replicated because the original is not provided out of the box on the RCX.

o ThreadImpl: encapsulates a Runnable instance inside a java.lang.Thread. Abstracts the differences between the PC and NXT versions of Java in naming threads.

o ThreadFactory: base class for creating threads for a provided Runnable instance, that may or may not be named, according to the Factory pattern.

o NamedThreadFactory: concrete implementation of ThreadFactory used on the PC and NXT for creating named threads.

o UnnamedThreadFactory: concrete implementation of ThreadFactory used on the RCX for creating unnamed threads.


The robot system includes many components that can all be started, stopped and have other features common to all “processes”. The following interfaces and classes abstract these common features:

o Process: any component that can be started, stopped and its running state queried implements this interface.

o ProcessBase: the canonical base implementation of the Process interface. Uses the Template Method pattern to allow deriving classes to react to start and stop events. Also provides access to a logger.

o AsyncProcess: a derivative of ProcessBase, every threaded component derives from this base class. Abstracts away the work involved in starting and stopping a thread and the associated synchronization requirements. Deriving classes override shouldWait() to tell the base class whether it should wait before running the next iteration and execute() to provide the implementation of a single iteration.
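A condensed sketch of that template (hypothetical; the real class also deals with the synchronisation subtleties mentioned below):

// Hypothetical condensation of the AsyncProcess template method idea.
abstract class AsyncProcessSketch implements Runnable {
    private volatile boolean running;
    private Thread thread;

    final synchronized void start() {
        running = true;
        thread = new Thread(this);
        thread.start();
    }

    final synchronized void stop() throws InterruptedException {
        running = false;
        thread.join(); // wait for the loop to exit cleanly
    }

    public final void run() {
        while (running) {
            if (shouldWait()) {
                try {
                    Thread.sleep(10); // back-off interval is an assumption
                } catch (InterruptedException e) {
                    break;
                }
            } else {
                execute(); // one iteration of the component's work
            }
        }
    }

    protected abstract boolean shouldWait();
    protected abstract void execute();
}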

A simple publisher/subscriber system was implemented to enable components to signal one another when some state changed. For example, it is used by the vision service to notify the control agent that a new frame of vision data is available to be processed. The following interfaces and classes support this mechanism:

o Subscriber: anything that can be signalled implements this interface.

o Notifier: anything that maintains a list of subscribers implements this interface.

o BasicNotifier: the canonical base implementation of the Notifier interface using an ArrayList to maintain the list of subscribers.
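In outline (hypothetical method names, following the description above):

import java.util.ArrayList;
import java.util.List;

// Hypothetical outline of the publisher/subscriber trio.
interface Subscriber {
    void signal(); // called when the notifier's state changes
}

interface Notifier {
    void subscribe(Subscriber subscriber);
}

class BasicNotifier implements Notifier {
    private final List<Subscriber> subscribers = new ArrayList<Subscriber>();

    public void subscribe(Subscriber subscriber) {
        subscribers.add(subscriber);
    }

    protected void notifySubscribers() {
        for (Subscriber s : subscribers) {
            s.signal(); // e.g. "a new frame of vision data is available"
        }
    }
}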

The order of state checks and the resulting behaviour inside the AsyncProcess.Runner.run() method were carefully honed over a long period. Changes to this code can have significant effects which are not always immediately obvious. Careful attention should be given to the comments in this class.

Some design decisions were constrained by the limitations of the LeJOS version of Java available on the NXT and, in particular, on the RCX. For example, enums could not be used in code that was to run on the RCX since the version of Java available on the RCX does not support enums.

APPENDIX K: RCX TESTING

After changing RCX code, it should first be tested inside the LeJOS RCX simulator. This offers very limited error detection facilities as there is no way to interact with the system – it just generates random values on sensor ports.

The LeJOS RCX simulator does not operate under the same memory constraints as the real RCX, so "out of memory" issues can only be found by running the code for real.

A typical RCX test run involves:
o Starting the program on the RCX.
o Connecting the NXT and RCX light communications.
o Actuating a touch sensor and looking for the resulting "collision" message.
o Sending a "kick" command to the RCX.
o Sending a "reset" command to the RCX.

When an exception occurs on the RCX it is not possible to know what type of exception has been raised, but the default assumption should be "out of memory".

Adding LCD print statements throughout the RCX code is often the only way to track down the source of a problem. However, LCD printing is itself unreliable and can sometimes obfuscate the true problem.

To confirm a suspected "out of memory" issue on the RCX: (1) use print statements to determine which area of code is generating the exception; (2) comment out other chunks of code until the exception goes away; (3) if the exception does not go away, it probably isn't an "out of memory" issue; (4) use the linker output to see the code size range in which the exception occurs. Trial and error has shown the code must be under 6kB in size.

Light communications: set the base light pulse time very high (e.g. 1 second) and observe the light flashes using a Mark I Eyeball.

Ensure batteries are well charged in both NXT and RCX – the light intensity varies slightly with battery power and may drop out of the configured ranges.


APPENDIX L: COMMUNICATION PROTOCOL

An optimisation in the Vision–Control link reduced the amount of traffic between the two systems by switching to a one-directional data stream (subsection 1.4). This gave the opportunity to standardise the protocol throughout the system (so that as little code as possible could be used in as many places as possible) and the vision server was restructured. The whole system now works with integer-based messages and supports message queues to increase reliability in case the connection is lost.

1.1. MESSAGE TYPES

Control → Vision
0  Acknowledgment
1  Request pitch info
2  Start 'objects info' stream
3  Stop 'objects info' stream
4  Request a vision system reset

Vision → Control
0  Acknowledgment
1  Send pitch info
2  Send objects info
3  Log message

1.2. MESSAGE FORMAT

<message type><# arguments><arg1>…<argN>

A 'pitch info' message looks like this sequence:
1,6,width,height,tl_goal_Y,bl_goal_Y,tr_goal_Y,br_goal_Y

An 'objects info' message looks like this sequence:
2,19,bal_X,bal_Y,blu_X,blu_Y,ylw_X,ylw_Y,blu_dir_X,blu_dir_Y,ylw_dir_X,ylw_dir_Y,trust_bal,trust_blu,trust_ylw,bal_vel_X,bal_vel_Y,blu_vel_X,blu_vel_Y,ylw_vel_X,ylw_vel_Y

tl – top left           X – x coordinate
bl – bottom left        Y – y coordinate
tr – top right          vel – velocity vector
br – bottom right       dir – direction vector
blu – blue robot        ylw – yellow robot
bal – red ball

In the beginning, the Vision system used the 'trust_*' arguments to notify the Control system when there was suspicion that the arguments being sent contained unreliable data (e.g. when the ball moved half of the pitch within 1/25 of a second). These are not used now, but have not been removed yet.

1.3. FIRST PROTOCOL VERSION

Messages are comma-separated signed numbers sent as ASCII encoded strings. Each message ends with a newline character.

Vision is server, Control is client (Control connects to Vision).

Control always sends a request, Vision responds (request driven approach).

Size of a request or acknowledgement message (these messages don't have any arguments):

(2 digits + 1 comma + 1 newline) * 2 bytes = 8 bytes

Average size of an 'objects info' message (this has 19 arguments):

Our system views the pitch as a 295x580 pixel matrix. So for simplicity, the average argument length is assumed to be 3 symbols (including signs – used in direction and velocity vector args).
(3 digits + 19 args * 3 sym + 1 newline + 20 commas) * 2 bytes ~ 162 bytes

Average size of data transferred over the network for an 'objects info' message to be dealt with:

(no acknowledgements are sent here)
v1_size ~ 8 + 162 ~ 170 bytes

1.4. SECOND OPTIMISED VERSION

Messages are integers sent as raw byte data (interpreted as signed integers by the receiver).

Control connects, Vision starts streaming data as fast as possible (one-directional approach).

Vision is server, Control is client (Control connects to Vision).

Size of a request or acknowledgement message (these messages don't have any arguments):
2 integers * 4 bytes = 8 bytes

Size of an 'objects info' message (this has 19 arguments):
(2 integers + 19 integer arguments) * 4 bytes = 84 bytes

Size of data transferred over the network for an 'objects info' message to be dealt with (no constant requests are sent; data is streamed to clients after a single request):

v2_size ~ 84 bytes
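On the Control side, consuming the optimised stream reduces to a simple blocking read loop. A sketch in Java (illustrative names; it assumes a DataInputStream over the socket, matching the raw 4-byte integer framing described above):

import java.io.DataInputStream;
import java.io.IOException;

// Hypothetical read loop: every message is <type><argCount><args...>,
// all sent as 4-byte signed integers.
final class StreamReader {
    private final DataInputStream in;

    StreamReader(DataInputStream in) { this.in = in; }

    void readLoop() throws IOException {
        while (true) {
            int type = in.readInt();
            int argCount = in.readInt();
            int[] args = new int[argCount];
            for (int i = 0; i < argCount; i++) {
                args[i] = in.readInt();
            }
            dispatch(type, args);
        }
    }

    private void dispatch(int type, int[] args) {
        // e.g. on this channel type 2 is "send objects info" (19 args),
        // which would be forwarded to the matching message handler
    }
}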

1.5. QUANTIFYING THE PERFORMANCE GAIN

The average performance gain when sending 'objects info' messages can be given as the ratio of the sizes of data sent over the network under the first and the second (optimised) protocol versions:

performance gain ~ v1_size / v2_size ~ 170 / 84 ~ 2.02

i.e. the optimised protocol transfers roughly half as much data (~202%) per 'objects info' message dealt with.


1.6. FIGURES

APPENDIX M: VISION SYSTEM

1. JUSTIFICATIONS

1.1. LANGUAGE AND LIBRARY CHOICE JUSTIFICATION

We decided to use the OpenCV library because of its optimised vision algorithms (App. M.1.2). The C, C++, Python and Java languages were supported. Python was chosen because research showed that OpenCV was faster and more easily used with the Python bindings than with the Java ones. Moreover, C and C++ were discarded because the team was more familiar with Python. Python is also a scripting language and does not require compilation, which sped up development and testing.

1.2. HSV COLOUR CHOICE JUSTIFICATION

The HSV (hue 0-360, saturation 0-255, value 0-255) colour model was chosen because several papers and websites stated that its hue channel is not affected by lightness or darkness. This means increased robustness when light conditions change. Moreover, some testing and observation proved that objects were better and more clearly isolated (objects' features were less distorted). In OpenCV the hue channel can have values from 0 up to 179 (multiplied by two to obtain the real 0-360 hue). These values are unique for every colour, and thus we used them to find the objects on the pitch.

2. STRUCTURE

All file names in the following figures are relative to the project's Melmac.Vision/src/melmac/python/ directory.


3. CONFIGURATION

The vision system is started with the trunk/Melmac.Vision/src/melmac/python/src/start_vision.py module. It can be configured through command line parameters or a settings file.

3.1 USAGE

python start_vision.py [-s <settings_file.py>] [--no-cam] [--show <list>] [-b <percent>]

-s         load custom settings file
-b         brightness in percent (e.g. 50%)
--show     display visual feedback according to <list>, which can contain the
           following arguments:
             0 - coloured image feed with all coordinates
             1 - binary image feed of ball with coordinates
             2 - binary image feed of blue robot with coordinates
             3 - binary image feed of yellow robot with coordinates
             4 - binary image of region near yellow and blue robots
--no-cam   use an image instead of live camera feed

N.B. In case '-s' is supplied to load a different configuration file, please note that a relative path to settings_file.py will not work with symbolic links. Also, the name of settings_file.py must be changed to something different from VisionSettings.py, otherwise Python will load the default settings file.

3.2 SETTINGS FILE

Below is the default configuration file of the system. It lists and describes all the supported properties. A custom local configuration file can also be used.

import os

# This will print exceptions on the standard output.
DEBUG = True

# Stack trace depth
DEBUG_LEVEL = 5

# Logging functionality
LOGGING = True

# Logfile
LOG_FILE = os.path.join('..', '..', 'log', 'vision_server.log')

# In case --no-cam has been passed, the vision system uses a frame token to
# work on. So this is the path to the frame token that is going to be used.
FRAME_TOK = os.path.join('..', '..', 'frame.jpg')

# Time between server startup trials
STARTUP_ATTEMPT_TIMEOUT = 3  # seconds

# Number of server startup trials
STARTUP_ATTEMPTS = 10  # times

# The port on which the vision server operates
VISION_SRV_PORT = 5000

# Determines whether the server operates on the network or only on localhost.
VISION_SRV_NET_OPEN = True  # False - on localhost, True - on network

# Timeout of queued messages
MSG_QUEUE_TTL = 10  # seconds

# The maximum number of pending requests the server accepts
MAX_PENDING_REQS = 3

# The maximum size of control system request messages in bytes
MAX_REQ_MSG_SIZE = 60  # 2 bytes is one character

4. PROBLEMS

4.1. CORRUPTED OPENCV FRAMES (SECTION 7, FIGURE 1.)

OpenCV on DICE could not fetch frames from the camera properly. The left part of the image was distorted, image quality was low and the aspect ratio was wrong (see figure). Moreover, frames were interlaced very often. The aspect ratio should be 4:3 (~1.33) but was ~1.09. The pitch wall ratio should be 2:1, but was actually ~1.88. This means that distances are not homogeneous across the pitch and there will be error if a mapping from pixels to real distances is made. Moreover, the frame rate frequently dropped below 10fps at random times – a big problem!

4.2. BARREL DISTORTION

Barrel distortion correction was implemented for Milestone 2, but it actually reduced accuracy, because it changed the aspect ratio from ~1.9 to ~1.83. When applied to a clean undistorted image of size 640x480 px, the calibration matrix stretched the image and the goals of the pitch went outside it. A new calibration matrix was not calculated and the correction was removed.

4.3. ROBOT HIDES BALL (SECTION 7, FIGURE 4)

This problem occurs when the robot is near the goals of the pitch. The angle between the camera-robot line of sight and the pitch decreases and the blind spot becomes big enough to hide the ball. This was partially fixed.


4.4. ERROR IN EXACT ROBOT LOCATION DETERMINATION DUE TO ROBOT HEIGHT

Because of the same effect (subsection 4.3), the centroid of the robot is wrongly detected on the pitch. This means that inaccuracies will occur if a conversion from pixels to real distance is made. This problem was not fixed. In any case, correcting this problem will only be valuable if barrel distortion correction is present, so that the surface of the pitch maps as closely as possible to the real points on the pitch; otherwise the distance-based calculations will be wrong.

4.5. FIRST METHOD FOR ORIENTATION AND POSITION DETERMINATION (SECTION 7, FIGURE 3)

The first method of finding orientation and centroid worked by locating the centroids of two objects on the top of the robot – the T and a white block to the right of the plate. The idea was to connect these centroids and rotate the resulting vector V by 90° to make a new vector V' which would show the right direction. Other teams used two white blocks for this, but our physical design did not allow us to do this. The white block was needed as the HSV and RGB colour models could not describe the colour of the spot above the T's hat. No matter what the thresholds were, the spot never appeared as a selected group of pixels in the resulting binary image. Only a blurry outer contour was seen, but it always appeared with the blue T and was undetectable. This first attempt was inaccurate, because the centroids of the two objects were flickering. So the vector V was not stable at either end and the direction moved too much.

4.6. PROBLEMS OF THIRD METHOD FOR LOCATION AND ORIENTATION DETERMINATION (SEE SECTION 7, FIGURE 4)

When the robot faced about 45° in each of the four quadrants of the 2D plane, the direction constantly swapped between being correct and exactly opposite. This happened because the method works by comparing the amount of T pixels on both sides of the X and Y axes of a coordinate system starting from the T's centroid (see figure), and in these cases the two amounts are very close to each other. From testing, we noticed that the width of the error window was 10-15°. So the accuracy of the method was estimated to be:

((360 – 4 * 15) / 360) * 100 ~ 83%

Just to ensure no errors occurred, when writing the correction for this method (described in subsection 5.5), we increased the error region to 20°.

5. SOLUTIONS

5.1 CORRUPTED FRAMES IN OPENCV

To fix this, we wrote a script that used the DICE command line tool v4lctl. The script purposely sets the capture device into a specific wrong mode and changes it to the correct one afterwards. The script with comments and a simple example are available in Appendix E, section 2 (reproduced in section 8.4 below). A clear frame can be seen in section 7, Figure 2.

5.2 BARREL DISTORTION

We found a matrix in the code of last year's Group 5, which also used OpenCV. However, this matrix worked for images of size 768x576 and hid the goals when used on smaller images. So barrel distortion correction was removed from the system. Despite this, accuracy was still improved, because after the frame corruption problem was solved the pitch wall ratio became ~2.0, which is correct in reality.

5.3 ROBOT HIDES BALL (SECTION 7, FIGURE 4)

This problem was partially solved by returning the last known coordinate of the robot. Though not optimal, this works, because the robot would think it's on top of the ball and try to get behind it or kick, which is reasonable and will most probably cause the ball to be revealed.

5.4 SOLUTION TO THE PROBLEMS OF THE FIRST METHOD FOR LOCATION AND DIRECTION FINDING (SECTION 7, FIGURE 4)

We implemented a method that reliably found the location and orientation of the Ts based on the Ts alone. It relied on the whole object and not on many separate ones. The method used normalised image moments to detect the direction by finding the longest line that could be drawn through the object and checking the skewness of the Ts.

5.5 CORRECTING THE THIRD METHOD FOR ORIENTATION AND DIRECTION DETERMINATION (SECTION 7, FIGURE 6; FIGURE 7.2 FOR SUCCESS RATES)

We wrote a method that compared the angle between the direction vector returned by the 'moments' method (V2) and another vector, taken from a midpoint between two blobs cut from the T (V1). At first, the correction scanned a small circular ROI around the T to minimise outside noise. This was inefficient, as heavy noise still created blobs in the ROI big enough to flip its direction vector. So we changed it to redraw the T contour on an empty image instead. This eliminated all the noise and increased accuracy to 100%.

The method uses the direction vector V2 (always collinear to the T lengthwise, even if the direction is swapped) to cut a region in the middle of the T to split its 'hat' and remove its 'base'. This leaves two blobs with centroids C1 and C2. Their midpoint is calculated and is needed to compute the vector V1. The angle between V1 and V2 is then checked: if it is less than 90 degrees, the direction is swapped and needs to be inverted; otherwise it is right and no inversion is done. To save CPU resources, the check is performed only when the robot enters the regions where the error can occur (subsection 6, Figure 4).

6. PERFORMANCE

The frame rate of the system is consistently above 23fps. When a static image is used instead of the camera feed, the frame rate was almost 50, which meant that the system was fast enough to do its job. Automated performance tests were done using the test client. The system was tested by connecting more than 100 clients (running on one computer along with the server) that received the data stream simultaneously – every client reported a frame rate of ~24, which meant that the system was not affected by such a number of clients. The test client was also used to verify the compliance of the Vision system with the established communication protocol.


7. FIGURES


8. CODE

8.1. METHOD FOR FINDING THE ANGLE BETWEEN T'S PRINCIPAL AXIS AND THE X AXIS OF THE PITCH

"""
This method returns the exact angle of the robot orientation using image
moments. However, near the bisecting lines of the four quadrants, the angle
can be the exact opposite of the real one.
"""
def orientation_angle(self, moments):
    cmoment11 = cv.GetNormalizedCentralMoment(moments, 1, 1)
    cmoment20 = cv.GetNormalizedCentralMoment(moments, 2, 0)
    cmoment02 = cv.GetNormalizedCentralMoment(moments, 0, 2)
    cmoment30 = cv.GetNormalizedCentralMoment(moments, 3, 0)
    cmoment03 = cv.GetNormalizedCentralMoment(moments, 0, 3)

    # Angle of the principal axis; falls back to a large value when the
    # denominator is zero.
    try:
        orientation = 1.0 / 2.0 * math.atan(2.0 * cmoment11 / (cmoment20 - cmoment02))
        orientation = math.degrees(orientation)
    except ZeroDivisionError:
        orientation = 1.0 / 0.01

    # Resolve the quadrant from the signs of the moments.
    if cmoment11 == 0.0 and (cmoment20 - cmoment02) < 0.0:
        orientation = orientation + 90.0
    elif cmoment11 > 0.0 and (cmoment20 - cmoment02) < 0.0:
        orientation = orientation + 90.0
    elif cmoment11 > 0.0 and (cmoment20 - cmoment02) == 0.0:
        orientation = orientation + 45.0
    elif cmoment11 > 0.0 and (cmoment20 - cmoment02) > 0.0:
        orientation = orientation
    elif cmoment11 == 0.0 and (cmoment20 - cmoment02) == 0.0:
        orientation = orientation
    elif cmoment11 < 0.0 and (cmoment20 - cmoment02) > 0.0:
        orientation = orientation
    elif cmoment11 < 0.0 and (cmoment20 - cmoment02) == 0.0:
        orientation = orientation - 45.0
    elif cmoment11 < 0.0 and (cmoment20 - cmoment02) < 0.0:
        orientation = orientation - 90.0
    elif cmoment11 == 0.0 and (cmoment20 - cmoment02) > 0.0:
        orientation = orientation - 90.0

    # Skewness along each axis disambiguates the direction of the T.
    try:
        skew_x = cmoment30 / (cmoment20 ** (3.0 / 2.0))
    except ZeroDivisionError:
        skew_x = cmoment30 / 0.1
    try:
        skew_y = cmoment03 / (cmoment02 ** (3.0 / 2.0))
    except ZeroDivisionError:
        skew_y = cmoment03 / 0.1

    if orientation >= -45.0 and orientation <= 45.0:
        if skew_x > 0.0:
            orientation = orientation
        elif skew_x < 0.0 and orientation > 0.0:
            orientation = orientation - 180.0
        elif skew_x < 0.0 and orientation < 0.0:
            orientation = orientation + 180.0
        elif skew_x == 0.0:
            if skew_y > 0.0:
                orientation = 90.0
            elif skew_y < 0.0:
                orientation = -90.0
    elif ((orientation <= -45.0 and orientation >= -90.0) or
          (orientation >= 45.0 and orientation <= 90.0)):
        if skew_y > 0.0 and orientation > 0.0:
            orientation = orientation
        elif skew_y > 0.0 and orientation < 0.0:
            orientation = orientation + 180.0
        elif skew_y < 0.0 and orientation > 0.0:
            orientation = orientation - 180.0
        elif skew_y < 0.0 and orientation < 0.0:
            orientation = orientation
        elif skew_y == 0.0:
            if skew_x > 0.0:
                orientation = 0.0
            elif skew_x < 0.0:
                orientation = 180.0

    # Map the result into the range used by the rest of the system.
    if orientation >= -90.0 and orientation <= 180.0:
        orientation = orientation + 90.0
    elif orientation >= -180.0 and orientation <= -90.0:
        orientation = orientation + 450.0

    return orientation

8.2. FINDING THE T'S CENTROID USING MOMENTS

""" Returns the centroid of a blob given its image moments. """
def get_contour_center(moments):
    spatial_moment10 = cv.GetSpatialMoment(moments, 1, 0)
    spatial_moment01 = cv.GetSpatialMoment(moments, 0, 1)
    area = abs(cv.GetCentralMoment(moments, 0, 0))

    # Ensuring that there is no division by zero.
    # PLEASE DO NOT TOUCH THIS, DO NOT TRY TO AVOID 0 DIVISION BY ADDING
    # A VALUE TO AREA BELOW, BECAUSE IT WOULD FAIL IN SOME CASES
    area = area or 0.01
    return (spatial_moment10 / area, spatial_moment01 / area)

8.3. METHOD FOR CALCULATING VELOCITY VECTORS

"""
The function returns two tuples. The first tuple has the data from an older
frame. Please avoid having number_of_members = 0.
"""
def simple_get_members(cache, number_of_members):
    if len(cache) <= 1:
        return None
    else:
        members = cache[(-number_of_members - 1):]
        return (members[0], members[len(members) - 1])


"""
Calculates the velocity vector for 60 sec.
older_newer_elem_tuple = [((x, y), time), ((x2, y2), time2)], where the first
tuple is from an older frame.
"""
def calc_velocity_vector(cache, number_of_members):
    if len(cache) <= 1:
        return (int(0), int(0))
    else:
        older_newer_elem_tuple = Locator.simple_get_members(cache, number_of_members)
        if older_newer_elem_tuple != None:
            velocity_vector = Math.sub_vectors(older_newer_elem_tuple[1][0],
                                               older_newer_elem_tuple[0][0])
            the_time = older_newer_elem_tuple[1][1] - older_newer_elem_tuple[0][1]
            try:
                velocity_vector = tuple([element * (1.0 / the_time)
                                         for element in velocity_vector])
                velocity_vector = tuple([int(round(element * 60.0))
                                         for element in velocity_vector])
            except ZeroDivisionError:
                return (int(0), int(0))
            return velocity_vector
        else:
            return (int(0), int(0))

8.4. SCRIPT FOR FIXING THE CORRUPTED FRAMES OF OPENCV

8.4.1. SCRIPT

#!/bin/bash
#
# UNIVERSITY OF EDINBURGH, SDP GROUP 2, 2011
#

v4l_ctl=/usr/bin/v4lctl

# Set video source to Composite0 and mode to NTSC.
# These settings are not desired, but are needed for the desired ones
# to work. It seems that OpenCV can only react to the change if
# these have been set prior to the correct ones.
$v4l_ctl setnorm NTSC
$v4l_ctl setinput Composite0

# Now setting the correct values.
echo 'V4L: Setting video norm to PAL ...'
$v4l_ctl setnorm PAL  # set video norm to PAL
echo 'V4L: Setting video source to S-Video ...'
$v4l_ctl setinput S-Video
echo 'Done. Exiting ...'

# Ready to go, have fun!
exit 0

8.4.2. EXAMPLE OF USAGE WITH PYTHON

import cv, os

# some random code here ...
cap = cv.CaptureFromCAM(0)
# It works if you invoke it before or after you query some frames.
os.system('/path/to/your/script.sh')
# some more random code here ...
cv.NamedWindow('win', 1)
while True:
    cv.ShowImage('win', cv.QueryFrame(cap))
    cv.WaitKey(10)  # give highgui a chance to actually draw the frame

APPENDIX N: STRATEGY SELECTION

How can the performance cost of constant utility re-evaluation be minimised?

1. Most of the factors on which the strategies depend are used in a number of strategies. Therefore the system only needs to calculate each factor once and store the result for later use, instead of calculating it once for every strategy that depends on that factor. An example would be three different strategies that depend on whether the opponent is blocking the target goal or not. The system can calculate the result of this factor the first time a strategy asks for it and store the result. This way, when the other two strategies ask for the same result they receive the already calculated result, instead of issuing a new calculation, and the overall number of calculations is reduced.

2. By smartly arranging the factors for each strategy, a strategy can easily be discarded by looking only at its first factor. For example, consider a strategy which defends our goal and whose utility depends first on whether our robot is in possession of the ball. In this example, if this factor is true, attacking would be a wiser choice than selecting a defensive strategy, so by just checking the first factor the strategy can be discarded without performing calculations for the factors that follow. By the same principle, if the first factor does not discard a strategy there is a chance that the second factor will, otherwise it could be the third one, and so on.

3. If all the boolean factors are checked and the strategy is still not discarded, the strategy is applicable in the given situation. Some strategies, however, do not only depend on boolean factors; some also depend on numerical factors. Such numerical factors are needed since there are cases where multiple strategies are applicable, for example a situation where our robot is in possession of the ball and the target goal is clear. Two good decisions are to either shoot towards the goal or to dribble towards the goal first. An example of such a factor might be the distance to the goal: the further away our robot is from the target goal, the more meaningful it is to first dribble and then shoot, and the closer the robot gets to the goal, the higher the utility of the shooting strategy rises, until the shooting strategy gets picked over the dribbling one.
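A sketch combining the three ideas: cache each factor the first time it is computed, order the boolean factors so a strategy can be discarded early, and only then evaluate the numerical part (all names are hypothetical, not the actual melmac classes):

import java.util.HashMap;
import java.util.Map;

// Hypothetical per-frame cache of boolean factor results (idea 1).
final class FactorCache {
    interface Factor { boolean compute(); }

    private final Map<String, Boolean> results = new HashMap<String, Boolean>();

    // Each factor is computed at most once per vision frame.
    boolean get(String name, Factor factor) {
        Boolean cached = results.get(name);
        if (cached == null) {
            cached = factor.compute();
            results.put(name, cached);
        }
        return cached;
    }

    void clear() { results.clear(); } // call once per new frame
}

// Hypothetical defensive strategy using short-circuit discarding (idea 2)
// followed by a numerical utility (idea 3).
final class DefendStrategySketch {
    interface World {
        boolean wePossessBall();
        double ballDistanceToOwnGoal();
    }

    double utility(FactorCache cache, World world) {
        // Cheapest, most discriminating factor first: if we have the
        // ball, attacking is wiser, so discard this strategy at once.
        if (cache.get("wePossessBall", world::wePossessBall)) {
            return 0.0;
        }
        // ... further boolean factors would go here ...
        // Numerical factor: the closer the ball is to our goal,
        // the more urgent defending becomes.
        return 1.0 / (1.0 + world.ballDistanceToOwnGoal());
    }
}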

APPENDIX O: FINAL STRATEGIES

Explanation of the strategies used in the final day competition:

1. Aim to shoot – rotate to face a point on the target goal that is clear (the opponent is not in the way to that point). This strategy is picked when:
o the ball is not in the opponent's possession
o a clear shot towards the target goal is possible
o our robot is close enough to the ball to be able to kick it
o our robot is not already facing the clear point on the target goal.

2. Block opponent – move to a point which is close to the opponent and is in the way between the opponent and our own goal. The closer our robot is to the opponent, the smaller the range where the opponent can shoot. This strategy is picked when:
o the ball is in the opponent's possession

3. Defend penalty – get the orientation vector of the opponent and calculate the point on our own goal-line which the opponent is facing. Move to that point in order to block the opponent's kick. If the opponent rotates and faces another point, the strategy will send the robot to that new point. This strategy is manually selected and is executed until the ball has been kicked. At that point the agent picks another strategy and resumes normal play.

4. Do nothing – do not send any commands to the robot. This is used when the robot needs to be repositioned, for example after a goal.

5. Intercept ball – calculate the velocity vector of the ball and get the nearest point from our robot to that vector. Calculate if it is possible to intercept the ball at that point (this is done by obtaining a ratio of the ball's speed and the robot's maximum speed as well as a ratio of the distances from the ball and the robot to that point; a sketch of this check is given after this list). If interception at that point is possible, then move towards that point, otherwise move towards the point on the vector which is closest to our own goal. If the ball slows down, there is a chance of catching up with it and intercepting it at some later point. This strategy is picked when:
o our robot is away from the ball
o the ball is moving
o the ball is moving towards our goal

6. Kick direct – kick the ball straight. This strategy is executed when:
o a clear shot towards the target goal is possible
o our robot is in possession of the ball (our robot is facing the ball)
o our robot is facing a point on the target goal
o a shot towards that point is possible

7. Kick off wall – rotate to face a point on the wall and then kick the ball towards that point to try to score with a rebound. This strategy is picked when:
o a clear shot towards the target goal is not possible
o our robot is in possession of the ball

The closer our robot is to the target goal, the higher the utility gets, because the ball has a higher chance of reaching the goal when the distance it has to travel is shorter.

8. Move behind ball – go around the ball without hitting it. This strategy creates a path consisting of two points, both of them either above or below the ball, depending on the positions of the ball and our robot. The first point is also in front of the ball, and the second one is behind it. The second point is returned if it is possible to reach it without hitting the ball, otherwise the first point is returned. This strategy is picked when:
o our robot is not behind the ball
o the opponent is not in the way between our robot and the ball

9. Move straight to ball – go to a point which is behind the ball and is close enough to the ball to be able to kick it. This strategy is picked when:
o our robot is not in possession of the ball
o the opponent is not in the way between our robot and the ball
o our robot is behind the ball

In addition, the distances from both robots to the ball are compared. The closer our robot is to the ball compared to the opponent, the higher the utility gets. The opponent might be in possession of the ball, in which case a defensive strategy such as Block opponent will be more suitable.

10. Move to avoid opponent – move on a path around the opponent to go to the ball. The path consists of two points, both of them either above or below the opponent, depending on the positions of the two robots. The first point is also in front of the opponent, and the second one is behind it. The second point is returned if it is possible to reach it without colliding with the opponent, otherwise the first point is returned. This strategy is picked when:
o the opponent is in the way between our robot and the ball

11. Perform penalty – get the orientation vector of our robot and calculate the point on the target goal-line which our robot is facing. If the opponent is not blocking that point, kick the ball; otherwise rotate to face the corner of the goal which is farther from the opponent and then kick the ball. This strategy is manually selected and is executed until the ball has been kicked. At that point the agent picks another strategy and resumes normal play.

12. Stop – stop the motion of all motors and wait for another command. This strategy is picked when:
o our robot is behind the ball
o our robot is moving towards either the top or the bottom of the pitch (the robot was approaching the ball from the side)

This strategy is needed because of the delay of the kick command. If our robot is approaching the ball straight from the back, it will dribble for less than a second before kicking the ball. But when approaching the ball from the side, our robot will continue moving after the kick command has been sent and will not hit the ball. In such a situation our robot first stops and then issues a kick command, thus eliminating that issue.
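For strategy 5, the interception feasibility test mentioned above can be sketched as follows (hypothetical names; the real check works on the velocity vectors supplied by the vision system):

// Hypothetical sketch of the intercept feasibility test: interception is
// possible if the robot can reach the point no later than the ball does.
final class InterceptCheck {
    static boolean canIntercept(double robotDistToPoint, double robotMaxSpeed,
                                double ballDistToPoint, double ballSpeed) {
        if (ballSpeed <= 0.0) {
            return true; // a ball that is not moving can always be reached
        }
        double robotTime = robotDistToPoint / robotMaxSpeed;
        double ballTime = ballDistToPoint / ballSpeed;
        return robotTime <= ballTime;
    }
}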

APPENDIX P: PATH FINDING

Examples of path finding techniques, i.e. techniques for handling extreme cases.

The Stop strategy described above has been added to the agent with only one purpose: to deal with the extreme case which arises from the delay in the kick command. In addition, strategies such as Aim to shoot, Kick direct, Move straight to ball and Move to avoid opponent have been modified to include some techniques that handle the extreme cases described below.

1. Ball near the wall – when the ball is near the wall, Move straight to ball will try to position the centre of our robot behind the ball, but that might not be possible, since the robot might hit the wall. A simple solution in this situation is to move to a point slightly above/below the initially set target point to avoid collision with the wall.

After our robot has gained possession of the ball, Aim to shoot might be picked, but when the ball is near the wall the robot might not be able to kick it towards the goal, so instead of rotating to face the target goal, our robot will face forward. At this point Kick direct will be issued, which also takes into account the fact that the ball is near the wall.

2. Ball in a corner near our goal – in such a situation it might be risky to attempt to get the ball out of the corner. Instead, our robot attempts to defend its own goal. Move straight to ball and Move behind ball have been modified to take this situation into account – the robot will go to the corner of its own goal which is nearer to the ball. This behaviour will prevent the opponent from scoring if it manages to gain possession of the ball.

3. Ball in a corner near the target goal – attempting to score in such a situation is the best solution; however, if the ball is in the corner it is impossible to face the goal and kick the ball. That is why Move straight to ball and Aim to shoot have been modified to handle this situation. Once the robot is near the ball, Aim to shoot tells it to face forward in a similar way as when the ball is near the wall. In addition, Move straight to ball returns a move command to a point which is slightly behind the ball and also a bit closer to the target goal. This way, the robot pushes the ball to the wall and then slides it slowly inside the target goal.

4. Opponent is removed from the pitch and is detected either on our robot or on the ball – to handle this situation, a simple check calculates the distance from our robot and from the ball to the opponent robot. In a normal situation that distance is always greater than at least half of the robot's length (otherwise two of the objects would be on top of each other, which is not allowed). Therefore, if the distance to either our robot or the ball is smaller than expected, the opponent robot is assumed to be outside of the pitch.
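That plausibility check can be sketched as follows (illustrative names; the half-robot-length threshold follows the reasoning above):

// Hypothetical sketch: if the reported opponent position is impossibly
// close to our robot or to the ball, assume it has left the pitch.
final class OpponentPresenceCheck {
    static boolean opponentOffPitch(double distOpponentToUs,
                                    double distOpponentToBall,
                                    double halfRobotLength) {
        return distOpponentToUs < halfRobotLength
            || distOpponentToBall < halfRobotLength;
    }
}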

APPENDIX Q: TOUCH SENSORS

FIGURE: FRONT COLLISION, SIDE AND REAR COLLISION CURTAIN. THE REAR CURTAIN AND SENSOR CAN BE SEEN AT THE FAR LEFT OF THE FRAME IN THE SECOND PHOTO.

Ultrasonic sensor   Touch sensors   Pros and cons

Not present         2               Simplest to design – one NXT sensor on the left and right side. Not very robust; will not detect y-axis collisions.
Not present         3               One on the front, two on the sides.
Not present         4               One sensor on each side, front left and right excluded.
Not present         5               Two on the front, one per major side. Current design.
Present             2               One on either side, ultrasonic sensor front-mounted to detect the ball.
Present             3               Ultrasonic sensor in front, two NXT touch sensors on the sides, and one RCX on the back. Provides sensing in every direction, and additional distance functionality.
Present             4               Ultrasonic sensor in front for distance functionality; the other sensors placed on the front left and right, and the back left and right. The rear would be uncovered.

TABLE: COLLISION SENSOR CONTINGENCIES. WE HAVE CHOSEN FIVE SENSORS FOR THE MILESTONE

APPENDIX R: USER INTERFACE

The UI needed to fulfil two requirements: it must initialise the robot with given settings, and it must enable manual control of the robot and its strategies for manual testing. When the UI was under development, the info provided by the UI had to be mocked up, or manually changed for each demonstration or match. The process of changing the info was tedious and unnecessary. Additionally, the constant modification of simple strategies meant that every time a change was made, the set of strategies that could be executed had to change. Again, that process was time consuming and unnecessary. Like other components of the robot, the UI had many iterations and goals. The UI aimed to incorporate many features: the ability to set strategies, movements, colour, attacking side, and pitch location, and to monitor real-time output of utility calculations and console output. In the first cycle, only the abilities to select the colour, attacking side and basic movement commands (not strategies) were implemented. In the next update, the UI included the ability to select a single strategy. In the final cycle the UI could select multiple strategies. Unfortunately, the ability to view utility calculations and console output could not be implemented due to time constraints, but they were not nearly as important as strategy and info selection. While the implementation itself was not difficult (the NetBeans IDE has a built-in GUI builder), integrating the UI proved to be more difficult. The only significant challenge was to alert classes in a separate jar file when the UI had changed. The dependency structure of our program disallowed communication from Melmac.App to Melmac.Core, but allowed it to travel in the other direction. Fortunately, the "Notificator" objects allowed us to raise messages which let other classes listening to the "Notificator" know that information had changed.
