Teaching a Robot How to Dance


Introduction

For the U.S. to compete successfully in a knowledge-based global economy, more of our young people need to pursue careers in science, technology, engineering, and mathematics (STEM). Furthermore, because computing touches every field and powers the ubiquitous smart devices on which we increasingly rely, everyone needs to become fluent in the skills of computational thinking. Unfortunately, too many of today’s students are turned off by technical subjects. Or, even if they make it through a conventional STEM course, they may not gain the kind of solid foundational understanding on which they can build. In 2011, a group consisting of Daniel Ozick, Tim McNerney, Lee Mondshein, and David Ng came together to collaborate on a project aimed at changing that.

In December 2011, we invited two high-school juniors to test elements of an exciting new first course in computer science called “Teaching a Robot How to Dance.” Amalia and Ariana had no significant prior programming experience or interest in robotics. As you will see, the conceptual learning these students demonstrate by working on their own is truly remarkable.

“Teaching a Robot How to Dance” is designed around a final project that brings computation off the screen and into the world of physical objects and real-time interactions. Building on a graduated series of smaller projects, students will program a simple rolling robot to carry out a series of dance moves in time to music — while also following a partner robot and dealing with obstacles in the environment. The tempo of the music, the dance moves of the lead robot, and the map of the environment are not known in advance. You can play with a simple simulation that includes elements of the first part of the course here.

The dance project is designed to engage students’ interests and allow for artistic creativity; more importantly, though, students will need to apply high levels of computational thinking to complete the project successfully. We immerse students in a microworld — the robot hardware, the software that controls it, and the surrounding environment — and they learn how the microworld works through exploration and construction, simply by following their natural curiosity and solving problems. They develop confidence in what they learn because their understanding is grounded in theories they actively test themselves. [2:36]


Technical Details

One of the many building blocks that students will need to construct on their way to the final project is a means of maintaining the robot at a fixed distance from an infrared (IR) beacon. The beacon broadcasts an IR signal that the robot detects using four sensors, each one pointed away from the robot in a specific direction: front, back, right, left. Each sensor is most sensitive to IR that arrives head-on (on axis) and less sensitive to IR that arrives at an angle (off axis). Based on the strengths of the signals received at the sensors, the robot must determine its orientation to and distance from the beacon.

Our robot is a differentially steered vehicle that moves by setting speeds for its right and left drive wheels. Because Amalia and Ariana are completing a task that is not assigned until two weeks (ten lessons) into the course, we streamlined the programming process by having them verbalize what they want the robot to do while Daniel (the instructor) types in the actual computer code. [4:54]


Part 1: Observing Follow-the-Leader Behavior

Amalia and Ariana begin the session by observing the behavior of a robot that has already been programmed for follow-the-leader behavior. As she pushes the beacon around the room, Ariana observes that the current follow-the-leader behavior is actually occurring in two discrete steps: (1) turning to face the beacon, and (2) maintaining a fixed distance from the beacon. While this example of follow-the-leader behavior is functional, it is not very smooth. [1:51]


Part 2: Investigating the Connection between the Robot and the Beacon

When the rod on the beacon temporarily blocks the IR signal to the robot, the robot behaves erratically and fails to follow the leader. This leads Amalia and Ariana to question whether the robot’s detection of the beacon is affected by orientation, and which part of the beacon the robot is actually detecting. By attempting to physically block the IR signal between the beacon and the robot, they are able to determine where the IR emitters are mounted on the beacon. [2:07]


Part 3: Learning about IR Sensors

Prompted by Daniel to brainstorm different types of sensing that can be blocked by a foot, Ariana guesses radio waves. When they struggle to come up with any other ideas, Daniel finally introduces them to the concept of infrared (IR) light emitters and detectors. Asked to pay close attention to the four LED lights (yellow, orange, green, and red) mounted on the top of the robot, Amalia and Ariana initially suspect that the LEDs turn on according to the motion of the robot (that is, when the robot turns right, the green LED on the right lights up). But when the robot backs up, the orange LED in the back does not light up. They then suspect that the LEDs are activated when the beacon is on the corresponding side of the robot (that is, the green LED lights up when the beacon is to the right of the robot). They are able to confirm this when Daniel deactivates the motors for the two drive wheels. When Ariana notices that an LED will light up even when the beacon is not on that side, Daniel demonstrates that the IR sensors on the robot can also detect reflected IR light. [3:03]

Part 4: Putting Follow-the-Leader Behavior into Context

After explaining how the two drive wheels on the robot function to move the robot, Daniel walks Amalia and Ariana through the task they are about to complete. They need to program the robot to follow the beacon using its two drive wheels and its sensor data. This is a building block for the final dance project. When Amalia and Ariana have trouble wrapping their heads around the task, he asks them to embody the robot. While Ariana immediately sees that the robot must move in the direction of the lit LEDs, Amalia suggests using the data from the sensors as feedback for a closed-loop control system that will maintain the robot’s orientation with respect to the beacon. Finally, Daniel introduces the concept of behavior-based programming. A behavior consists of a trigger condition and the set of actions to take when the behavior is triggered. Hopefully, complex follow-the-leader behavior will emerge from a set of simpler behaviors. [3:55]
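The behavior abstraction Daniel introduces — a trigger condition paired with a set of actions — can be sketched in a few lines of Python. The names (`Behavior`, the `ir` sensor dictionary, the speed values) are hypothetical illustrations, not the course’s actual software:

```python
# A behavior pairs a trigger condition with the actions to take when it fires.
# Sensor readings are passed in as a dict of IR signal strengths (assumed API).

class Behavior:
    def __init__(self, name, trigger, action):
        self.name = name
        self.trigger = trigger  # function: sensors -> bool
        self.action = action    # function: sensors -> (left_speed, right_speed) in mm/s

# Example: turn right (spin the left wheel) whenever the right sensor sees the beacon.
turn_right = Behavior(
    name="turn_right",
    trigger=lambda ir: ir["IR_RIGHT"] > 0,
    action=lambda ir: (100, 0),
)

sensors = {"IR_FRONT": 0, "IR_BACK": 0, "IR_RIGHT": 42, "IR_LEFT": 0}
if turn_right.trigger(sensors):
    left, right = turn_right.action(sensors)  # -> (100, 0)
```

Arbitration among several such behaviors — deciding which one runs when more than one trigger is satisfied — comes up later in the session.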

Part 5: Moving the Robot

Daniel asks Amalia and Ariana what actions the robot should take to turn right. Ariana answers that the left wheel should turn while the right wheel stays still. This is an example of differential steering. When Daniel asks them to supply numbers (in millimeters per second) for the drive wheels, they hesitate. It does not matter how fast the left wheel is turning. The robot will keep turning to face the beacon until the green LED turns off. The speed only affects how quickly that happens. To give them a sense of how quickly the robot can turn, Daniel manually programs the left drive wheel to turn at 100 mm/s and the right drive wheel to turn at 0 mm/s. While the robot is turning in a circle, they look at the sensor data from the four IR sensors that the robot is receiving from the beacon. This sensor data is also being displayed in real-time in a graph on a computer monitor. [2:28]
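The differential-steering idea at work here can be sketched with a little arithmetic: the robot’s turn rate is proportional to the difference between its wheel speeds. The track width below is an assumed value for illustration, not a measurement of the actual robot:

```python
# Differential steering: the robot's turn rate depends on the difference
# between its two wheel speeds.

TRACK_WIDTH_MM = 230.0  # hypothetical distance between the drive wheels

def turn_rate(left_mm_s, right_mm_s):
    """Angular velocity in radians per second (positive = counterclockwise)."""
    return (right_mm_s - left_mm_s) / TRACK_WIDTH_MM

# Pivoting around the stationary right wheel, as in the session:
pivot = turn_rate(100, 0)        # negative: the robot turns right (clockwise)

# Turning in place: wheels run at equal speeds in opposite directions,
# doubling the turn rate while adding no forward motion.
in_place = turn_rate(100, -100)
```

As the text notes, the magnitude of the wheel speed only affects how quickly the robot converges on the beacon’s direction, not whether it gets there.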


Part 6: Programming the First Two Condition-Action Rules

For their first behavior, Amalia and Ariana propose that the robot turn right when the green LED is on. Daniel helps them translate this into a condition-action rule that he can program in computer code. (The language and programming environment used in this session are stand-ins for our actual software under development.) When Daniel points out that the data stream from IR_RIGHT is numerical, Ariana realizes that the turning speed for this behavior could be a fixed value or proportional to the signal strength received by IR_RIGHT. The first approach is known as bang-bang or on-off control and the second approach is known as proportional control. With Daniel’s help, Amalia and Ariana write a behavior for turning right and a similar behavior for turning left. Daniel types these behaviors into a new program for the robot. [3:20]
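The two control approaches Ariana identifies can be contrasted in a short sketch. The threshold, gain, and speed values are hypothetical, chosen only to make the difference visible:

```python
# Two ways to derive a turning speed from the right IR sensor's signal strength.

def bang_bang_turn(ir_right, threshold=10, speed=100):
    """On-off control: turn at a fixed speed whenever the signal clears a threshold."""
    return speed if ir_right > threshold else 0

def proportional_turn(ir_right, gain=1.5, max_speed=100):
    """Proportional control: turn faster the stronger the off-axis signal."""
    return min(gain * ir_right, max_speed)

bang_bang_turn(40)     # -> 100 (fixed speed, regardless of signal strength)
proportional_turn(40)  # -> 60.0 (scales with signal strength, capped at max_speed)
```

Bang-bang control is simpler but tends to overshoot; proportional control eases off as the error shrinks, which matters later when the robot’s motion turns out to be jerky.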


Part 7: Using IR Signal Strength as an Indirect Measure of Distance

When Amalia proposes that the robot should move forward when the yellow LED is on, Ariana points out that the robot actually needs to stop at a fixed distance from the beacon. Otherwise, the robot will collide with the beacon. This leads to a discussion about using IR signal strength as an indirect measure of distance. Ariana takes advantage of the available beacon and robot to confirm that IR signal strength and LED intensity are indeed related to distance. When Daniel shows them that numerical values for IR signal strength are also displayed in the sensor data graph, Amalia uses the real-time instrumentation to determine that the robot should stop when the signal strength of IR_FRONT is at 60. This information is used to program a third behavior for the robot. [2:41]


Part 8: Turning When the Beacon is behind the Robot

Amalia and Ariana move on to programming behaviors for when the beacon is behind the robot. Amalia wants the robot to turn when the orange LED is on, and the direction in which the robot turns should be based on whether the green or red LED is brighter. Ariana attempts to use absolute IR signal strengths to determine the turning direction, but using the real-time instrumentation to test her trigger conditions, she quickly realizes that the relative signal strength of IR_RIGHT and IR_LEFT is the true determining factor. Daniel helps them translate this trigger condition into computer code, and a fourth behavior (back_turn_right) is added to the program. [3:35]


Part 9: Understanding and Applying Behavior Arbitration

Ariana notices that there are cases when the orange and green LEDs are both on. Those conditions satisfy the triggers for the turn_right and back_turn_right behaviors, so which behavior is executed? In the behavior-based model we are using, only one behavior may be executed at a time; selecting the behavior to be executed is called behavior arbitration. The behaviors are ordered by priority: the highest priority behavior whose trigger condition is satisfied is the one that runs.

Amalia and Ariana quickly realize that the back_turn_right and back_turn_left behaviors can both be covered if the trigger conditions for the turn_right and turn_left behaviors are updated to IR_RIGHT > IR_LEFT and IR_LEFT > IR_RIGHT, respectively. Then they only need to handle the case when only the orange LED is on. In that case, the robot should move backwards towards the beacon. [3:25]
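The arbitration rule described above — behaviors tried in priority order, first satisfied trigger wins — can be sketched as a simple loop. The behavior names echo the session, but the trigger thresholds and speeds are illustrative, not the course’s actual code:

```python
# Behavior arbitration: behaviors are tried in priority order, and the first
# one whose trigger condition is satisfied supplies the wheel speeds.

behaviors = [
    # (name, trigger, action) — highest priority first
    ("turn_right",   lambda ir: ir["IR_RIGHT"] > ir["IR_LEFT"], lambda ir: (100, 0)),
    ("turn_left",    lambda ir: ir["IR_LEFT"] > ir["IR_RIGHT"], lambda ir: (0, 100)),
    ("move_forward", lambda ir: ir["IR_FRONT"] < 60,            lambda ir: (100, 100)),
]

def arbitrate(ir):
    for name, trigger, action in behaviors:
        if trigger(ir):
            return name, action(ir)
    return "none", (0, 0)

# The beacon is ahead and to the right: turn_right wins even though
# move_forward's trigger is also satisfied.
arbitrate({"IR_FRONT": 30, "IR_BACK": 0, "IR_RIGHT": 20, "IR_LEFT": 5})
```

Only one behavior runs per cycle; reordering the list changes the robot’s overall behavior without touching any individual rule.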


Part 10: Thinking through the Priority Order of Behaviors

While Daniel is entering changes to the turn_right behavior, Amalia and Ariana discuss the priority order for the turn_right, turn_left, and move_forward behaviors. After considering their options, they decide that the robot should turn to face the beacon first, and then move forward. They then decide that the move_back behavior, triggered by the orange LED, should be the lowest priority behavior. And since it is the lowest priority behavior, the trigger condition for the move_back behavior does not need to specify that the green and red LEDs are off: once a green or red LED is on, a higher priority behavior will be triggered. [2:21]


Part 11: Implementing Closed-Loop Control Using Feedback

While testing for all possible cases, Ariana notices that they have not handled the case when the robot is too close to the beacon. If IR_FRONT > 60, then the robot should back up. As Daniel enters this new back_up behavior, Amalia and Ariana decide that this behavior should have highest priority. With the robot backing up when IR_FRONT > 60 and moving forward when IR_FRONT ≤ 60, they have used feedback to implement a bang-bang closed-loop control system whose purpose is to maintain a fixed distance between the robot and the beacon. [3:00]
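The resulting bang-bang distance controller can be sketched as follows. The setpoint of 60 is the value from the session; the function name and speeds are an illustrative reconstruction:

```python
# Bang-bang closed-loop distance control: IR signal strength stands in for
# distance (stronger signal = closer to the beacon).

SETPOINT = 60  # target IR_FRONT signal strength, as chosen in the session

def distance_control(ir_front):
    """Return (left, right) wheel speeds in mm/s."""
    if ir_front > SETPOINT:   # too close: back up
        return (-100, -100)
    else:                     # at or below the setpoint: move forward
        return (100, 100)

distance_control(75)  # -> (-100, -100), backing away from the beacon
distance_control(40)  # -> (100, 100), closing the distance
```

Because the controller always commands full speed in one direction or the other, it can never settle exactly at the setpoint — which is precisely the jerking Amalia and Ariana observe in the next test.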


Part 12: Testing Version 1.0

After renaming some of the behaviors for consistency and finalizing the priority order, Daniel compiles Amalia and Ariana’s program and downloads it into the robot. When the robot is reactivated, it turns to face the beacon and moves forward or backward to the target distance. But instead of smoothly stopping and maintaining the target distance, the robot continues to jerk forward and backward. Ariana notes that the robot’s behavior is “too sensitive” and wonders if the priority order needs to be adjusted. Amalia notes that the robot backs up smoothly, but feels that the turn_right and turn_left behaviors need tweaking. [3:06]



Part 13: Diagnosing an Emergent Behavior

An emergent behavior arises from the interactions of multiple component behaviors. Amalia and Ariana did not program the robot to constantly jerk forward and backward: that behavior has emerged from the five simple behaviors that they did program. Emergent behaviors can be incredibly difficult to diagnose. Daniel guides Amalia and Ariana to analyze the robot’s behavior given what they know and using the available real-time instrumentation. Amalia identifies two possible causes for the emergent behavior: (1) the closed-loop control system for maintaining distance from the beacon is constantly triggering the robot to move forwards and then backwards; and (2) their turning behaviors move the robot forward, which then triggers a move backwards. After analyzing the IR sensor data from the robot, they decide to implement a deadband (sometimes called a neutral zone) for the distance control system. This means that the move_forward and move_back behaviors will not be triggered when the signal strength of IR_FRONT is between 55 and 65. [3:14]
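The deadband fix can be sketched by adding a do-nothing zone around the setpoint. The band limits (55–65) are the ones chosen in the session; the rest of the sketch is an illustrative reconstruction:

```python
# A deadband (neutral zone) around the distance setpoint: inside the band,
# neither move_forward nor move_back triggers, so the robot stops hunting.

DEADBAND_LOW, DEADBAND_HIGH = 55, 65  # band chosen in the session

def distance_control(ir_front):
    """Return (left, right) wheel speeds, or None if no behavior triggers."""
    if ir_front > DEADBAND_HIGH:
        return (-100, -100)   # too close: back up
    if ir_front < DEADBAND_LOW:
        return (100, 100)     # too far: move forward
    return None               # inside the deadband: neither behavior fires

distance_control(60)  # -> None: the robot holds position instead of jerking
```

A deadband trades a little positional accuracy for stability — a recurring theme in the remaining tests, where Amalia and Ariana tune the band’s width empirically.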


Part 14: Testing Version 2.0

As their program is compiled and downloaded into the robot, Amalia and Ariana discuss the range of their deadband. Unfortunately, they do not have enough information to make an informed decision about it. Daniel explains that if there is a value they would like to tweak, they can store it in a variable or parameter. Once the robot is reactivated with the new version of the program, it displays an emergent behavior that looks similar to the previous one. By turning off the motors to the drive wheels, Amalia and Ariana are able to use the real-time instrumentation to set a range for their deadband. They decide to expand it to signal strengths between 40 and 80. [2:28]


Part 15: Testing Version 3.0

As their program is compiled and downloaded into the robot, Amalia and Ariana discuss adding deadbands to their turn_right and turn_left behaviors. Once the robot is reactivated with the new version of the program, it displays another undesirable emergent behavior, producing a motion that’s even jerkier than the one they observed before. Daniel prompts them to analyze the emergent behavior by looking at the specific behaviors running in real-time, but Amalia is fixated on adding deadbands to the turn_right and turn_left behaviors. She turns off the motors to the drive wheels and uses the real-time instrumentation to set a range for them. Meanwhile, Daniel prompts them again by asking what the robot does when none of the trigger conditions are true. The answer: the robot simply continues doing what it was doing. This leads to the addition of an idle behavior (both drive wheel speeds are set to 0 mm/s). The trigger condition for idle is always true, but, since this behavior has the lowest priority, it will only run when no other behavior is eligible to run. [4:15]
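The idle behavior slots in as an always-true trigger at the bottom of the priority list. In this sketch the other behaviors and their thresholds are illustrative stand-ins; only the idea — a lowest-priority fallback that stops the wheels — comes from the session:

```python
# An always-true "idle" behavior at the lowest priority: when nothing else
# is eligible to run, the robot stops instead of repeating its last command.

behaviors = [
    ("back_up",      lambda ir: ir["IR_FRONT"] > 65,      lambda ir: (-100, -100)),
    ("move_forward", lambda ir: 0 < ir["IR_FRONT"] < 55,  lambda ir: (100, 100)),
    ("idle",         lambda ir: True,                     lambda ir: (0, 0)),
]

def arbitrate(ir):
    for name, trigger, action in behaviors:
        if trigger(ir):
            return name, action(ir)

# Inside the deadband, nothing above idle triggers, so the robot stops.
arbitrate({"IR_FRONT": 60})  # -> ("idle", (0, 0))
```

Because its trigger is always true, idle can never mask another behavior — it only catches the cases the higher-priority behaviors decline.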


Part 16: Testing Version 4.0

Once the robot is reactivated with the new version of the program, it continues to display a jerky emergent behavior. However, this time, the robot is also not completing its turn to face the beacon: it is stopping short. Daniel offers to update the instrumentation so that the second oscilloscope displays the name of the behavior as it is running in real-time. (This was an alpha version of the software where the instrumentation display did not update automatically.)

While Daniel works on the instrumentation, Amalia and Ariana decide to change how the robot turns so that it does not move forward at the same time. By changing the drive wheel settings, the robot can turn in place instead of pivoting around one wheel. And when she gets a chance to see which behaviors are running as the robot jerks repeatedly forward and backward, Ariana sees that the robot is constantly flipping between the move_forward and orange behaviors. She realizes that, even though the orange behavior is lower priority, it will still trigger when the robot enters the deadband for IR_FRONT. So they decide to modify the trigger condition for the orange behavior to be IR_BACK > 20. [4:23]


Part 17: Testing Version 5.0

Once the robot is reactivated with the new version of the program, it no longer jerks forward and backward when it has reached the target distance. But the robot is still stopping short and not completing its turn to face the beacon. After deciding to narrow the range of the deadbands for turning and to increase the overall speed of the robot’s motion, Ariana comes across an unexpected bug. When the beacon is directly behind the robot, the robot continues to move forward, ultimately colliding with the side of a floor lamp and forcing Ariana to shut it down. Knowing which behavior is running when the bug occurs enables Ariana to quickly diagnose and fix the problem. The move_forward behavior is being triggered when the beacon is behind the robot because its trigger condition is IR_FRONT < 40. Amalia and Ariana add a second condition, IR_FRONT > IR_BACK. [4:35]


Part 18: Testing Version 6.0

With version 6.0, robust follow-the-leader behavior appears to emerge. The robot now turns completely to face the beacon, and when the beacon is placed behind the robot, the robot backs up and then smoothly turns around. Amalia and Ariana decide to tweak the robot’s programming by narrowing the deadband for IR_FRONT (so the feedback loop maintains tighter control of the robot’s distance from the beacon) and changing the trigger condition for the orange behavior to exclude high values of IR_BACK: when the robot is backing up toward the beacon, they do not want it to get too close. Unfortunately, in making this change, they remove the requirement IR_BACK > 20, a condition they had added in version 5.0. [3:59]


Part 19: Testing Version 7.0

Without the IR_BACK > 20 requirement in the trigger condition for the orange behavior, the robot returns to constantly flipping between the orange and move_forward behaviors. Ariana realizes that if IR_BACK = 0 is enough to trigger the orange behavior, then the idle behavior will never be executed and the robot will jerk forward and backward. However, she is reluctant to specify a higher signal threshold for IR_BACK because she wants the orange behavior to trigger if IR_BACK can sense the beacon at all. Her solution is to trigger the orange behavior only if IR_BACK < 90 (so the robot does not get too close while backing up) and IR_BACK > IR_FRONT. [3:30]


Part 20: Testing Version 8.0

By this point, fairly robust follow-the-leader behavior has emerged (there is still one edge case not being handled effectively). In fact, the program created by Amalia and Ariana is more robust than the basic program we wrote ourselves. [1:58]


Debrief

At the end of the session, Amalia and Ariana discuss what they thought of the session, whether the final project for the course would be an interesting problem, and any prior experience they may have had in robotics.

Amalia says that the session “was fun,” and “not too difficult to understand.” She thinks that getting the robots to dance “would definitely make kids who accomplished it feel proud.” Ariana liked the session and thought it was “interesting.” She also thought that the learning in the session was “cool.” She says that “it was also interesting to figure out how the priorities work and what it’s sensitive to,” and that “when we fix something that’s a problem, it’s rewarding.” [1:57]


Commentary

Amalia and Ariana arrived for our learning session as computer science and robotics novices. We immersed them in a microworld consisting of the robot, its well-defined sensing and actuation capabilities, and the behavior-based programming language that allowed them to specify the connection between sensing and actuation. By the end of our 90-minute session, they were experts in the microworld, and were able to design a behavior-based program that enabled the robot to follow a beacon and maintain a fixed distance.

Along the way, they learned about:

  • IR signals
  • IR emitters and detectors
  • Closed-loop control systems using feedback
  • Behavior-based programming
  • Basic differential steering
  • Condition-action rules
  • Bang-bang (on-off) control
  • Proportional control
  • Signal strength as an indirect measure of distance
  • Using real-time instrumentation
  • Behavior arbitration
  • Priority order for behaviors
  • Testing for edge cases
  • Emergent behaviors
  • Diagnosing emergent behaviors
  • Deadbands
  • Calibrating sensors
  • Debugging cycles/engineering design processes


By the end of the session, most of these skills and concepts had been so firmly integrated into their understanding that they were applying and building on them instinctively. Notice how quickly they adapted to using the behavior-based programming model, real-time instrumentation, and other tools available to them (for example, thinking through the priority order, isolating the sensors from the drive wheel motors, observing the LED lights and data streams for the IR sensors, and analyzing the currently running behaviors) to solve problems with minimal guidance from the experts in the room. They were constantly probing, analyzing, and discussing between themselves while Daniel (the instructor) was occupied updating and compiling their program and downloading it into the robot.

The purpose behind teaching through microworlds is to construct a foundation solid enough for students to be able to complete the final project for the course and eventually transfer those skills and concepts to other domains. To move from crude follow-the-leader behavior to graceful improvisational-partner-dance behavior, students will need a more sophisticated understanding of differential steering, parameterized dance-move procedures, and some level of proportional-integral-derivative (PID) control, among other things. And their microworld will have to expand to include concurrency explicitly as they deal with multiple data streams (including cliff sensors, bump sensors, and virtual or derived sensors).

The point of this course and its interdisciplinary and artistically creative final project is not simply to engage students. Although engagement is essential for active and constructivist learning, engagement in itself does not lead to deeper understanding or improved capabilities. We go further: immersing the students in a microworld that they can explore on their own, giving them the tools to understand that microworld, and insisting that they build on and extend their understanding through problem solving. Working in the microworld puts students at the center of an active, constructive learning process.

While the concept of a microworld is not new, most microworlds are computer simulated and have no physical elements. This places a filter between the student and the world itself, as the student’s interaction with the microworld is mediated through someone else’s understanding. By taking the microworld off the screen and into the real world of physical objects and real-time interactions, we give the students a greater sense that they are building their own understanding on ground truth, and that what they learn is both meaningful and transferable.

Our physical, behavior-based, real-time microworld is also inherently richer and more realistic than many computer-simulated microworlds: emergent behaviors that arise from simple condition-action rules firing tens of times per second based on real-world (noisy) sensor data are more complex and more interesting to analyze and diagnose than behaviors generated by scripting or simple procedural programming. Learning how simple, easy-to-understand local mechanisms generate the complex global behaviors we observe gives students a powerful model with implications beyond computer science. It will encourage students to drill down to fundamentals, and help them to understand how macroscopic phenomena can be grounded in particle interactions on the molecular scale or how evolutionary outcomes can result from the operation of natural selection and genetics.

But our goal is not to produce a new generation of roboticists, computer scientists, or computational thinkers — although that is certainly important. This course will be successful when it initiates a virtuous cycle for students. By exploring and constructing their own understanding of a microworld, Amalia and Ariana were capable of independently solving a rich problem. As they continue to explore and construct ever-deepening understandings, their capabilities will grow along with the problems they can solve. Over time, this will affect how they engage with new problems and new microworlds, altering how they approach learning in general and how they perceive themselves as learners and thinkers. The content of this course is based on our personal interests and expertise in computer science and robotics, but the principles and processes involved are much broader than that.

© 2012 Computing Explorations, LLC