One night, my father wakes to strange voices and crashing sounds. He ventures slowly into the living room, where he sees me—his drowsy six-year-old—watching BattleBots in my footie pajamas.
“What are you doing?” he asks sternly.
“I couldn’t sleep,” I reply, without tearing my eyes away from a bot spinning madly around its opponent, looking for an opportunity to attack.
“Go to bed,” my father says, barely concealing laughter behind the order.
I stall, distracted by Bill Nye, who appears as a floating bobblehead on the screen to explain how the 48-by-48-foot battle arena is rigged with obstacles that the robot competitors can use against each other: steel spikes encircle the ring, pop-up rotary saws are concealed in the floor, and a giant hammer in one corner can crush circuitry. I watch as two teams compete, each equipped with a complex remote control that they use to drive their robots. The only objective: to incapacitate or adequately destroy the opponent’s carefully constructed creation. Teams build, operate and repair their bots through several rounds of competition, and the bots’ instruments of destruction range from hammers to spinning blades to flippers, the last of which turns a bot into a beetle on its back, unable to move and vulnerable to attack.
I don’t move either, too involved with the battle to fully register my dad’s command. “Go to bed,” he repeats, raising his eyebrows. I fumble with the remote briefly, stalling as he waits, and I realize that my late-night viewing, which I think of as an upgrade from sleepless tossing and turning, still falls under my father’s jurisdiction.
Eventually I power down the television, listening to its sharp static blackout before I stumble back to bed, leaving Deadblow to shred Kegger into steel strips with a sharpened blade anchored on the end of a hammer pivot.
In my first year of high school, I was required to complete a science experiment for Mr. Ivie’s Earth Science class. At the time, I was engrossed in The Universe in a Nutshell and the complex world of cosmology, so I told my dad one evening that I had picked light for my topic.
“What about light?” he asked.
“I don’t know—maybe the speed of light? Or something with prisms?”
My dad paused, thinking briefly, and then told me to come look at a book he had in the basement. I followed, watching as he dug out an old physics textbook and flipped to an early chapter.
“Here,” he said, pointing to a section on the original measurement of the speed of light. Beneath the text was an image. Galileo stands on a grassy hill, while his assistant stands on a hill a mile away. They both hold glowing shuttered lanterns. According to the accompanying explanation of their experiment, Galileo first un-shutters his lantern and begins counting. When his assistant sees the flash of the lantern, he un-shutters his own, and Galileo continues counting until he sees the response. Galileo’s idea is that the speed of light can be found by halving that time, but the experiment fails because light is imperceptibly fast, too fast to be counted accurately.
Together, my dad and I developed an experiment based on Galileo’s. Since we already had a reliable measurement of the speed of light, 299,792,458 meters per second, I used the same technique to measure sound. My question was whether the humidity of the air would affect the speed at which sound travels.
To produce our sound and light, my dad wired a starting pistol to a camera flash. I took a stopwatch and notepad. We found a stretch of relatively abandoned road, where my dad dropped me off before driving a mile farther. He would then trigger the gun and camera flash together, while I waited until I saw the light to start the timer, halting it when I heard the crack of the starting pistol.
My father and I communicated this way for several trials, measuring the humidity over the course of the week. One day we tested in the pouring rain: 100% humidity. Hair damp and clinging to my cheeks, I shivered and wondered whether Galileo’s assistant resented his superior for the hours of tests they ran on hilltops, shuttering and unshuttering lanterns and trying to count faster than humanly possible, wondering if light was even measurable or was actually infinitely fast as others had asserted. I wondered if he sighed with disappointment when their tests turned out to be failures.
When I ask that question now, my impulse is to say no. The assistant would more likely lament that he remains unnamed in all accounts of the experiment. Yet the principle led to a more successful experiment in 1660, eighteen years after Galileo’s death. Another of Galileo’s assistants, Vincenzo Viviani, took Galileo’s concept for measuring the speed of light and applied it to the speed of sound, using a shuttered lantern and a cannon. Viviani held the impossibility of Galileo’s experiment in mind and assumed that the speed of light is practically infinite compared to that of sound, and his experiment yielded the most accurate speed of sound recorded up to his time.
Altogether, it points to the collaborative nature of science: if not Galileo, then Viviani with his cannons, or astrophysicists with 62-mile-long laser experiments, or high school girls with a starting pistol and camera flash on abandoned country roads.
On that country road, my dad tells me over the walkie talkie that he’ll fire the starting pistol one last time, as soon as a straggling truck passes me. Soon I see the flash, jam my thumb on START and press again, as quickly as possible, when the first wave of the pistol’s crack reaches my ears.
“Got it?” my dad asks over the walkie talkie, as I record our last time for the day.
“Got it,” I answer, and my dad drives back to pick me up, showing me the recently broken connection between the camera flash and the starting pistol.
“I had to try to trigger them both at the exact same time manually,” he says.
“Another point for human error,” I say, making a note to include that in my write-up.
In 2013, President Obama announced the Brain Research through Advancing Innovative Neurotechnologies program, an initiative that funded neuroscience research and brought attention to what he termed “the next great American project.” The goal of the project was to create a working map of the brain, from the level of each synapse to a more complete map of the whole organ. Today, scientists continue to develop new technologies for imaging the brain’s parts and to track the activity of mental processes using existing technologies. An accurate brain map holds great potential, with applications ranging from genetic therapy to neurosurgery to robotics. The result is a bridge that scientists are building from both riverbanks: neurologists create a model of the existing brain while artificial intelligence researchers use that human model to create artificial brains, meeting in the middle. Through the combination of neurology and AI, we are moving toward a world in which we understand more of the basic composition of the brain than ever before.
However, the end result of our mapping and engineering may be a technological singularity, the point in time when robots surpass the limits of human cognition. The singularity signals the end of any remaining illusions of the superiority of human intelligence, and it often appears to loom closer and closer: robots have defeated humans in the Turing Test, a competition in which programmers must develop conversation bots that judges cannot distinguish from humans in online chats. Robots can play soccer cooperatively, guide museum visitors through galleries and map mines that humans cannot access. All things considered, the singularity seems uncomfortably near at hand.
Vernor Vinge, a mathematician, computer scientist and science fiction author, wrote in 1993 that the technological singularity would bring about the end of the human era. His essay is surprisingly optimistic compared to its dire beginning. Though Vinge argues that the singularity is only 30 years away (meaning that now we apparently only have 8 short years until it is upon us), he also advocates human/robot collaboration and reminds us that when a singularity occurs, we are ultimately the initiators. Humans will bring themselves into the next era, adjusting the camera so that we are no longer center screen. Fear of a robot uprising runs deep, not only in science fiction, because we would be the perpetrators held responsible, the ones in control of our own destruction.
And so Vinge describes the history that humans can perceive as a “vesica piscis,” or lens shape: time begins with a single point of singularity, the Big Bang, and then curves outward before reaching the largest point of expansion and curving back inward to the final technological singularity. By definition, we cannot see what came before the Big Bang, just as we will not be able to see what comes after the end of human time. Future models depend on our superiority. When robots run the show, our predictions naturally fail. Running through Vinge’s essay is a fear of the end of life being like the beginning, rounded and made circular by a repeated singularity. Yet self-sacrifice is the only means of creating immortality—if not for us, for our machines. As Vinge states, if the human mind is to evolve further, it may be necessary to grow beyond our biology. He phrases it directly, “To live indefinitely long, the mind itself must grow…and when it becomes great enough, and looks back…what fellow-feeling can it have with the soul that it was originally? Certainly the later being would be everything the original was, but so much vastly more.”
The essentiality of the robot is a recognition that our minds, too, are computers that can be mapped like circuit diagrams and programmed to produce certain outputs by a succession of genes and environment. If we succeed in creating a BAM, a complete map of brain activity, we must also surrender to the fact that we are mappable, that the degree of individuality we possess is limited by our dependence on context and our inability to escape it. If we can create autonomy in AI, then autonomy doesn’t truly exist; it can be reduced to programming, an existing structure that cannot be escaped. Surrendering degrees of choice is a slippery slope to determining that we do not have any.
Kegger vs. Deadblow: Round 1
“To my right, like a terrier in heat, other robots face defeat,” the ring announcer bellows into a microphone, “here is DEADBLOW!”
Grant Imahara, a MythBuster-to-be, stands behind his bot, beaming, and the camera zooms in on the bot’s weapon: a 3/8-inch solid titanium hammer built to rip into the enemy bot with a pressure of 1,500 psi.
“And in the blue square,” the announcer continues, “with a dual major and no hope of graduating, here is KEGGER!”
Kegger looks like it’s made of cardboard. It’s a sorry scrap of a bot—a wedge-type engineered to slip under its opponent and flip the enemy robot over, leaving the other bot vulnerable to attack. It’s bigger and heftier than Deadblow, but with a much more cumbersome design. As a fan of Imahara, who is a BattleBots regular, I give Kegger no chance.
Imahara moves Deadblow in on Kegger in the first 10 seconds, cutting off any chance of escape while the pointed hammer descends, piercing the heavier bot over and over again, its hammer bobbing like a drinking bird. Eventually Kegger’s side begins to tear and Deadblow rams the panel off the bot, revealing its vulnerable center. I imagine Grant squealing with delight—Deadblow passes the test. Kegger’s armor is nothing more than the lid of an anchovy can, just waiting to be peeled back. Often BattleBots comes down to this—one construction conquering another. It’s all about how they’re built.
My science fair project placed first in my high school, giving me a chance to take it on to the county-wide competition held at the local community college. I told Mr. Ivie I would go, and let my dad know the news as soon as I got home. He smiled and congratulated me, agreeing to take me to the county competition the next weekend.
The day of the competition, I set up my poster next to a (predictable) experiment on how music affects the growth of plants. I took care to center the poster perfectly, adding extra tape to a graph that was peeling loose. My dad watched, and commented on how my design made the poster stand out. We walked around a bit and took a look at a few of the other posters. I began to feel hopeful as I saw more science fair repeats: boiling water with and without salt, the potato battery, and a test of different parachute sizes. No other experiments focused on the speed of light, though there was a project on solar energy and house siding that made me nervous. It had the air of applicability that made my “pure science” project look less appealing.
Eventually the judges told everyone to leave the room so that they could look at the posters and discuss before calling us in for our presentations and Q&A. My dad saw a doctor he worked with at the hospital, and they shared a nod and hello as we left. I jealously wondered if the connection would boost my chances, before remembering where I was—in a county where everybody knew everybody. The bias was spread around more or less equally.
Mr. Ivie and my dad chatted as we waited, hitting it off quickly because of their shared science backgrounds. My teacher was interested in the work my dad did at the hospital with computer servers, and I was grateful to extract myself from the conversation to think about what I would say when the judges called me in.
My name was called soon enough, and backed by my dad’s encouraging smile, I entered the maze of posters, standing by my own. “Tell us about your experiment,” one of the male judges prompted.
“Well I wanted to know more about the speed of sound, and knew that we had a measurement of it, but didn’t know how that was affected by the humidity in the air. So my dad…I mean so I got the idea to wire together a camera flash and starting pistol, and my dad was able to do that, and then we found a pretty deserted road and stood a mile apart. I had a stopwatch, and waited until I saw the flash from the camera and started it. When I heard the noise, I stopped the watch and recorded the time. We recorded on days with different humidities, one day in the rain as you can see on the far end of the graph here,” I smiled, and the doctor my dad knew laughed a little. “So then I used the speed of light to find out the speed of sound, and here’s the result.”
“What was your conclusion?” a female judge asked.
“Oh, well I found that sound travels faster when the humidity is higher. I would have to do a lot more trials to get an accurate measure of how much faster, though.”
The doctor asked about the equations I had pasted to the background using stationery stickers.
“Just some equations for determining different things related to sound and light. This symbol here stands for light, I know. The rest I’m not sure about.” I faltered, embarrassed to be asked something I didn’t know the answer to. “That’s the Enola Gay,” I said, pointing to a picture I thought his eyes had darted to in order to fill the space. “The first plane to go faster than the speed of sound.” He nodded. Obviously he already knew that.
I left the room, feeling uncertain about my presentation. I hadn’t mentioned the background research I had done on the effect humidity has on the density of air, and the reasons for the faster speed. Instead I had let myself become distracted and intimidated by a relatively irrelevant question about my equations-as-decoration. Worst of all, my presentation made it sound like my dad had done all the work, that it had all been his idea. As I sank further into my disappointment, I asked myself, well—wasn’t it his idea?
A long hour later the judges called us all into the room and announced the winners. They started at the bottom, and I cringed when my name was announced first.
“Megan Mericle, for How Does Humidity Affect the Speed of Sound…Honorable Mention.”
I manufactured a smile for the photo op, taking my puke-green ribbon and poster out of the room as soon as we were done. Mr. Ivie stopped me and my dad as we left and noticed my less-than-enthusiastic expression.
“I don’t know what those judges were thinking,” he said. “I looked at all the presentations in that room and hers was the most complex out of any of them. I’ve seen that solar power experiment online before.” My dad glanced behind him before agreeing.
“Forget science fairs,” he told both of us. “You need to be doing Science Olympiad.”
The encouragement bolstered me, and Mr. Ivie promised to give me some materials about the club on Monday. On the way home my dad mentioned his colleague. “He’s a medical doctor—I’m really surprised they chose him to judge a mostly physical and natural science competition. And I think one of the other judges is an administrator at the hospital, with no science background at all.”
I could tell what my dad was insinuating, and as he kept talking it became clear. With different judges, the complexity of my experiment would have been seen. With different judges, it might have won. I looked down at the Honorable Mention ribbon in my lap. Underneath my immediate excitement about Science Olympiad and pride over stumping the judges, I considered that what my project really needed was a different presenter. Either I had failed to convey the complexity of my project, or the project had never been mine to begin with and the judges had seen that. I dismissed the idea quickly, pushing it away to be dealt with at another time.
A research division of the US Department of Defense holds a contest called the DARPA Robotics Challenge, which tests the engineering and technological progress of robotics programs around the world. The DRC focuses especially on autonomy, meaning that the robots who compete are moving and making decisions based on prior programming, without human input.
Every year, the challenge changes to reflect a natural or man-made disaster area that would require robot rescue operations. In 2015, the competition area was modeled after the nuclear disaster site at Fukushima. Teams engineered bipedal robots that could drive a vehicle to the site, exit the vehicle, operate a doorknob, utilize power tools, turn a valve, ascend stairs and navigate rubble in the form of scattered lumber and cinderblocks. In addition, there was a daily “surprise” challenge, and the event designers introduced radio jammers, simulating the potential communication hazards that a nuclear site would produce.
But 2015’s DRC turned out to be more of a slapstick variety show than a robotics showcase. The DARPA officials decided to remove the safety tethers that had, in previous years, kept malfunctioning robots from toppling over. The result: a fleet of million-dollar creations crashed, collapsed and dropped drills as spectators groaned with each failure. The competition looked like a series of America’s Funniest Home Videos-style pratfalls, and some of the best were compiled in a video set to the Sinatra hit “Love and Marriage,” making the slow falls look like the lovestruck robots were fainting from emotion, or simply giving up due to the difficulty of operating a common doorknob. Reporters joked that a new genre of robot vaudeville had been born. Spectators sympathized as robots crashed exiting their vehicles, reaching for a doorknob, stumbling over cinderblocks and keeling over backward on the stairs due to their top-heavy design. One robot fell so hard his head popped clean off his body. Others severed cables and bled oil, making the course look like a crime scene.
Anthropomorphizing the engineering marvels becomes much easier while watching them fail miserably, and the soundtrack makes the whole procedure soothingly funny. Without a price tag for damages accompanying the footage, watching DARPA fails is like watching precarious infants fall, minus the empathetic concern. It’s another reminder that the robots we fear are still in their infancy when it comes to self-direction and autonomy. Steve Crowe of Robotics Trends writes that “the thought of a robot uprising seems like science fiction after watching this video.” Though viewers aren’t likely to be thinking of the future of humanity while watching robots fail to turn doorknobs, reporters continually mention robotic takeover in their headlines. Successes supposedly bring us closer and failures hold off the inevitable technological singularity.
Kegger vs. Deadblow: Round 2
Deadblow moves in for the kill, hoping to rip out some of Kegger’s circuitry through its exposed side, like pulling entrails from a carcass. But its hammer head gets stuck in Kegger’s side instead, and Deadblow reverses madly, trying to extract its weapon from the other bot. Kegger takes the opportunity to push Deadblow into the spikes surrounding the arena, slamming it against one wall before centering it perfectly over a pop-up rotary saw obstacle in the arena floor that catapults the smaller bot into the air.
A quick camera angle reveals that Deadblow’s last hit has done more than rip Kegger’s protective armor. The spiked hammer head is embedded in Kegger’s college-budget control board, damaging the relays and microswitches that allow the builders to communicate with their bot. A reaction shot shows a doomed expression on a Kegger builder’s face that contradicts his 90s boy-band hair and cargo pants. The college team’s budget design is quickly falling victim to Deadblow’s more sophisticated build.
Grant Imahara frowns at the ring, thumbs moving frantically in an attempt to pull Deadblow’s hammer from Kegger’s side so he can deliver the finishing blow. The force of the hydraulic hammer is enough to rock Deadblow’s back end in the air like a seesaw, and Kegger does its best to keep Deadblow from pulling free, in order to continue dragging the other bot around the arena and pushing it into torturous obstacles.
Deadblow finally manages to wiggle free, and it waves its hammer arm in the air a few times in celebration. When it circles back around and targets Kegger, it becomes clear that Deadblow’s hammer ripped out essential circuitry when it pulled free. Kegger’s hammer arm makes a few pitiful attempts to rear its head before dying. Deadblow abandons its weapon and accelerates toward Kegger, ramming the square bot into the corner and trapping it with spikes. The move jars the last bit of circuitry loose, and Kegger is suddenly inoperable.
Kegger’s team decides, in good spirit, to surrender the rest of their time and let Deadblow hole-punch the robot that had probably added half a year or so to their student loan payments. Grant grins maliciously through his goatee, taking apparent joy in ripping off the rest of Kegger’s panels until the remaining 20 seconds run out.
Ceremoniously, both teams enter the ring and stand by their bots in the center, the ref gripping the arms of both lead builders.
“The winner, by a knockout, in the red square….DEADBLOW!” The ref raises Grant’s arm, a remote control lifted in Grant’s other hand, its silver antenna flashing in the stadium lights. Victory for bot and human.
I signed up for Science Olympiad, hoping that the competition would offer me a chance at a comeback. Some of my friends had competed the previous year, so I had a basic idea of what to expect, but I was surprised when I saw the variety of events on the sign-up sheet. One of the events was called “Electric Vehicle,” and with images of BattleBots dancing in my head, I checked the box and was placed in the event with my friend Meredith.
Meredith and I met after school on the first day of practice, and Mr. Ivie handed us a toy truck and rotary tool, suggesting that we salvage the motor from the toy for our own vehicle. Our car had to be able to travel a target distance, using only electronic rather than physical braking mechanisms. The problem was that we wouldn’t know the distance until the day of the competition, so we had to build a car that could be set to travel a large range of distances. We tore into the plastic covering of the toy truck Mr. Ivie provided in an attempt to look inside, cutting a few cables in the process. Meredith had to leave not long into our work, but we agreed to meet up again at my house that weekend.
Mr. Ivie had actually lucked out at the science fair by recruiting both a competitor and a mentor, as my dad was immediately excited by the idea of Science Olympiad and signed on to help. I brought the rule sheet home and gave my dad a chance to look at it. By the time Meredith came over, he had already dragged out the balsa wood he had from model airplane building, an Exacto knife set, circuit components, tracing paper and the soldering iron.
“So, what are we building?” he asked.
“An electric vehicle,” I answered, putting our rough sketch design and the toy truck motor on the craft table.
“What did you do to this?” my dad asked, inspecting the severed cables and rough-hewn edge where I had stripped off the plastic covering.
“The saw thing got away from me I think,” I said.
“Where’s the drive shaft?” he asked.
“Uhhh, what?” Meredith said.
“The front portion. Between the wheels. It should be connected.”
“We took that off,” I said. My father sighed, and I jumped in, “What? We didn’t know. Mr. Ivie said to take out the engine. We did that. Besides, we should probably make a new drive shaft anyway according to these rules.”
“Actually, you’re probably right about that,” my dad said.
For the next couple of months, we continued to build our vehicle under my father’s guidance and unhidden enthusiasm. Throughout it all I was the assistant fulfilling my dad’s design: I held balsa wood together while he ran thin lines of superglue between the joints. I pinned the assembled pieces on a blueprint my father traced with a ruler and protractor, using me as a motorized pencil who responded to instructions with sarcastic quips that amused Meredith. I worried, as my dad taught us how to solder and teased me for putting parts on in the wrong order, that he was becoming less of a mentor and more of a competitor. The rules were fuzzy on how involved parents could be, but weeks later, in the assembly before the competition began, my dad would raise his hand and swear to the Parent’s Pledge to support “independence in design and production of all competition devices.”
After a couple of miscalculations and rebuilds in an attempt to fall within the design specifications of the rules, we finished our build. Our final product was a Frankenstein’s monster assembly of balsa wood, circuit boards, hobby drive shafts and purely decorative flame stickers. Meredith and I stayed long hours after school running the car down the hallway after carefully marking distances on the floor with masking tape. We learned the car’s behavior until we could stop at any distance within millimeters and we knew the exact angle to position it at the start in order to account for drift. Yet when the vehicle stopped working in the middle of a run, I shrugged even though I’d been there during the entire construction. I took the car home to my dad, who diagnosed it after a brief glance. A circuit component had come loose, probably in one of our locker collisions. He re-soldered the wire to the connection, sending me back to school the next day with a repaired vehicle that I hesitated to call my own.
Robotics competitions are prevalent around the world and take many forms: unmanned robots race each other, play soccer, Sumo-wrestle each other out of a ring, destroy each other, navigate an obstacle course, solve a maze, joust, extinguish a fire and find a simulated baby, perform aerial stunts, climb a wall, play ping-pong, perform an underwater mission, or in the case of the UAV Outback Challenge, aerial robots must autonomously find a “lost” dummy called Outback Joe hidden in a three-by-three-mile area, report the coordinates of his position and deliver a medical package to that position without breaking it. The competitions thrive on stringent and sometimes torturous rule systems, hoping to encourage innovation of design through constraint. Some require autonomy, while others, like BattleBots, have human controllers behind the robot action.
The purpose of these competitions is to educate and inspire enthusiasm for technological research, especially in younger competitors, as well as provide a chance for collaboration in the hopes that the competitive teamwork will generate more innovation. The coordinators of RoboCup, a competition where autonomous robots play soccer against each other, hope that their robots will be able to beat a human soccer team by 2050—a rather optimistic goal considering the halting, lost-duckling robots that are currently competing. The competition tests several programming skills, including mapping a visual field, keeping bipedal balance, locating a ball and moving through different terrain. The gimmick of the soccer field is entertaining—fun to watch and fun to compete in—and RoboCup gives roboticists short-term goals to meet in place of the longevity of robotics research as a whole. But like all competitions, robot tournaments bring up the question of whether antagonism and time pressure bring about the best results, or just pit engineers and programmers against each other. Despite that concern, they’ve been an effective supplement and integral part of robotics programs, while also getting amateur builders involved in the sport and the innovation behind it.
Robot combat competitions did not begin in robotics programs, but at a sci-fi and fantasy convention. The Denver Mad Scientist’s Club held a free-for-all robot competition with only 11 rules, presiding over the event in their uniforms: white labcoats and hard hats. The winning bot spit silly string at opponents, as well as the audience. Other nerd and geek communities picked up the sport, bringing it to DragonCon in the form of Robot Battles. Marc Thorpe, who worked with Lucasfilm, organized Robot Wars in San Francisco, and the BBC picked up the concept for a television show of the same name, launched in 1998. The show was a success, drawing over six million viewers. Robot Wars was part technological testing bed, part game show and part stand-up act, with hosts like comedian Jeremy Clarkson poking fun at robots and their owners. Rules were established partly to maintain safety, but mostly to keep the competition fierce and entertaining.
When BattleBots brought the British show to the US, the rules were kept for the same reasons (minus the allowance for flamethrowers). And soon enough academia caught on to the model, perhaps in response to students who entered their robotics programs looking for the excitement of Robot Wars and instead catching the hunched shoulders and glazed eyes of endless programming. They needed an application. They wanted to combine the robot combat hobbyist’s passion with the cult audience member’s dedication. If that involved blowing shit up, so be it.
But the potential of robot sport that haunts me the most is that when robot battling and technological progress are combined, warfare is only one more step away. And taking that step would drastically alter the way that we engage in worldwide conflict. Some doubt a future where autonomous robots fight our wars for us, where skirmishes and conflicts depend solely on engineering intellect and programming knowledge. Robot battling is at least entertainment, at most a way to test the technological capabilities of our weaponry. It is a means to an end when the final goal has not yet been established, when the application could be ten or twenty years ahead of our time. The problem with short-term goals is that long-term consequences can be defocused or ignored. Drone strikes and weaponry can be directly connected to developments made in robotics programs, yet are rarely addressed directly, a fact that makes robot competitions seem less innocuous.
For now, the end goal of robotics remains autonomy, even as the term is debated and redefined. When we reach the point where we have to ask whether or not there should be a human decision maker hovering over the kill switch, then we seem to be heading toward complete autonomy all too quickly. There’s no guarantee that, under the same circumstances, a robot will make a better decision than a human—though it does help to think that at one point in this not-so-theoretical future, a human will still be able to say no to a robot’s kill suggestion. Robot films aside, the reverse could also be true: a robot saying no to our kill command, or a human pressing a kill switch in spite of the AI’s suggestion. There’s no way to know whether our robot warfare future will cause more or fewer deaths, more or less tragedy, more or less suffering. In situations like this, or even in less weighty scenarios like the self-driving cars currently in development, I am still attempting to figure out whether autonomy conflicts with control. What do we lose when we reach the point of the singularity, and what do we gain? Autonomy ultimately seems to lie somewhere in the gap between robots and humanity, between ourselves and our cognitively superior future selves, a gap that science is poised to close just as enthusiastically as some of us fear the result.
Reboot: Round 1
In 2015, back by popular demand, BattleBots returned on ABC. The move from a niche time slot on Comedy Central to a decent summer slot came with an added panel of announcers, new reality-show-style “backstories” on the builders and additional padding of post-battle interviews. But the producers still targeted longtime viewers by bringing back favorite builders from previous seasons and showing clips from the original series. For the first-season trial of the reboot, the BattleBots powers-that-be collapsed all the weight categories into one and removed long-standing rules that prohibited flamethrowers and mini bots.
In the second episode, Ghost Raptor is matched with Complete Control, both teams led by BattleBots regulars with their own cultivated personalities. Chuck Pitzer, leader of the Ghost Raptor team, is a rule-oriented builder focused on a fair fight. Complete Control’s team leader, Derek Young, became known as a villain in the early seasons of the original show, notorious for using whatever methods necessary to win.
Before the match, the fight looks fairly one-sided. Ghost Raptor’s spinner mechanism seems like it will easily overtake Complete Control’s more defensive design, which includes a flipper for knocking opponents away and a biting mechanism for crushing their frames. Derek Young’s team is decked out in gold lamé sweat suits, complete with a disco ball on a stick, while the Ghost Raptor team wears uniform black t-shirts and jeans, the professional choice in the BattleBots world. But instead of walking out of the arena when the announcements are over, Derek Young leans down and positions a large square present between Complete Control’s jaws. The announcers go crazy.
“There’s a note! What’s the note say?” The camera zooms in. “To Ghost Raptor: Our deepest condolences.”
The match begins and Ghost Raptor covers the distance between the two competitors in seconds, its bar spinner humming. It closes in on Complete Control and rips into the present, sending wrapping paper and cardboard flying. Confusion reigns as the announcers attempt to figure out what happened and audience members begin to boo. Inside the present is a net, now completely ensnared in Ghost Raptor’s spinner mechanism, preventing any movement beyond a brief retreat. The net loops around Ghost Raptor’s wheels in the escape, stranding the bot entirely.
“No entanglement devices!” Chuck Pitzer yells, and the ref on his side is saying the same thing. When the shot cuts to Derek Young, he’s grinning mischievously.
“The rules don’t say anything about entanglement devices. No ball bearings and no fishing line. Nothing about entanglements.”
A tense meeting between Derek and Chuck follows, probably encouraged by the producers, who are milking the controversial move for air time. Their conversation ends quickly, a hardened Chuck walking away with a sigh, saying, “Well, it wouldn’t be the way I would choose to win.”
ABC cuts away to clips from other battles as the judges deliberate. The other teams hear about the issue and rush to find nets for their own bots in case the officials decide to allow entanglements. Coverage continues with a rematch—apparently the final decision is that the contestants must start over because nets “aren’t in the spirit of BattleBots.” The result is a tense battle that has to be decided by score rather than knockout. Ghost Raptor’s blade snaps off in the first twenty seconds, so both bots vie to push the other into obstacles. The winner, able to wedge under the other bot and slam it into the arena wall: Ghost Raptor. A victory for robotics sportsmanship, that strange confluence of a desire to enjoy the sport for its own sake, an insistence on engineering ethics, and a respect for innovation over the easy solution.
On the day of my first Science Olympiad competition, Meredith and I turned our creation over to the Electric Vehicle officials to be checked for regulation dimensions and assembly. A last-minute uncertainty over whether the toothpick indicator we had hot-glued to the front counted toward the maximum length was quickly assuaged when the judges returned the car to us: it passed inspection. The competition area was in a hallway of the University of North Carolina at Greensboro’s biology building, where judges from the engineering department had meticulously laid out a course for the electric vehicles, marking intervals with tape. The judge at the start line was equipped with a stopwatch, and another stood waiting at the end with a millimeter measure. We were given a distance: 5½ meters. Meredith and I exchanged concerned glances as the team in front of us plugged a laptop into their vehicle, pulling up a program and entering values. My dad, who was waiting with us, looked closely and told me that they were using Lego Mindstorms software. I looked down nervously at the car in my hand that housed a toy truck motor and showcased matching flame stickers. But their advanced equipment turned out to be a shortcoming: they went over their allotted set-up time dealing with a malfunctioning program. I couldn’t help but smile when their car stopped half a meter short of the target distance after a slow crawl. Though our car wasn’t as reliable or customizable, we knew its quirks and could work with the drifts and jerks. And it was fast.
At the start of our run, Meredith and I safety-goggled up and went to work as my dad watched from the spectator area. I set the distance by rotating the front wheel 14¾ turns, checked the chassis one last time and set our vehicle down. Meredith crossed in front and carefully aligned the toothpick at the start line, angling to account for the unexplainable drift we’d encountered in practice. Like we’d done time after time, she paced out in front, judging the alignment. I checked from the other side and moved the back wheels right another millimeter. We told the judge we were ready, and I used a number 2 pencil to activate the switch and complete the circuit that sent our vehicle forward. I held my breath.
Our car lurched, jolting my nerves, and rolled on in a slightly curved route. Tapping the trigger pencil against my thigh, I waited until the car came to a stop. Despite our correction, it had still drifted quite a bit toward the right. But the car had stopped. Two millimeters from the target. I overheard the judge on the end mutter “nice.” Meredith and I shared a smile, and I wiped away the condensation gathering around the edges of my safety goggles. We still had another run, and the best of the two would be chosen for our final score. But this was our electric vehicle’s best performance yet. The flame stickers on the sides seemed to have activated. My dad, who had been watching the event for a while before we got there, said he thought our chances were good—he hadn’t seen any stops as close as ours so far.
Later that day, after I finished competing in my other two events, our whole team packed into a lecture hall with students from schools across the district, awaiting the results. I saw Meredith a few rows down, and she caught my eye and gave me a thumbs-up. The awards ceremony dragged on, and Electric Vehicle was one of the last to be announced. My teammates beamed around me, some draped with multiple medals, as I tapped my feet anxiously under my chair and growled at the teams defying the hold-your-applause command to scream for their winning members.
Finally the judges drawled out my category, and I sat forward as they moved from fifth place up. The simultaneous dread and hope of the scrolling places increased as they reached third: not me. Then second: not me. The announcer paused and then—Morehead High School—my school, my team and my bot. Meredith and I maneuvered past students seated in the aisles, received our blue-ribboned medals engraved with miniature microscopes and shook congratulatory hands. I returned to my seat to see my dad beaming behind me, and I posed with a medal as he took my picture.
“Good job,” he said, and then, more than ever, I felt science-fair redeemed.
When I return home and look at that medal, which still hangs on a shelf in my childhood bedroom, I consider both my pride that day and my confusion about my role in it all now. So many times in the build I felt like Galileo’s assistant, wondering why I was standing on a hill on the outskirts of Florence holding a lantern, shuttering and unshuttering it repeatedly as Galileo attempted to measure the speed of light. I held balsa together without being able to see the final construction in my mind, helped solder circuit components without knowing how they worked, and misunderstood the specifications outlined in the rules.
But I also consider the hours I spent in our high school hallway, stomach pressed to the floor as I tried to calculate the perfect angle that would offset the drift caused by a slight wobble in our CD wheels. I have to factor in my own efforts—the questions I asked about design that, even though they were born of a lack of understanding, sometimes led us to a solution or redesign. The trouble lies somewhere along the spectrum between collaboration and cheating, and it’s impossible for me to place my mark on that line precisely. I can’t parse my own contributions, and I’m losing faith in any claims of complete independence.
During the height of behavioral psychology, opponents of the theory fought for free will. Behaviorists like B.F. Skinner stated that human beings were only the result of response to stimulus, meaning that there was no such thing as true choice. Opponents argued that this was a mistaken idea: choice separated humans from our close mammal relations. As behaviorism waned, cognitive theory, which likened the human brain to a computer, came into prominence. Instead of testing the button-pressing skills of pigeons, researchers evaluated programs, mapping human responses within if-then statements. Herbert Simon and Allen Newell shifted from working on AI to developing an early cognitive theory that paved the way for the field. The mind, they stated, is a system in which complexity indicates an unexplored subsystem rather than a dark, unapproachable area our understanding can never reach. According to Simon and Newell, all behavior could be pared down to predictable symbols, neatly fitting into means-ends analysis. The two received the Turing Award on a curtained stage in their bowties and beards, accepting an award named for the man who devised a systematic test for determining whether a machine’s intelligence can be distinguished from a human’s.
Robotics and AI spiral tightly around each other, and their respective engineers collaborate yet are constantly at odds. The Harvard Biodesign Lab is engineering “soft” robots, made of inflatable muscles instead of bare steel parts. Mechanical skeletons are covered with pliable skin that registers touch. Robots are becoming increasingly like us, and we are responsible. As we effectively design ourselves, we must understand how we operate on increasing levels of complexity.
One of the challenges researchers run into as we attempt to map the brain and design AI that mirrors it is the “emergent” properties of our synapses. This is the idea that our neurons produce properties that aren’t detectable on the level of the individual cell—that our brains are greater than the sum of their parts. Some argue that a true Brain Activity Map (BAM) is impossible because of this: while we can measure individual neurons in isolation, we have no way of using that information to understand the whole. Worse, the brain receives a variety of external cues that are also part of the process, so it’s difficult to study the brain outside of particular contexts and draw conclusions that reach beyond individual environments. We can study general activity, measuring the electrical signals crossing the gray matter, but the results vary widely from person to person.
Writing for Scientific American, Partha Mitra opposes the emergent-properties idea, saying that it reflects a spirituality that’s been removed from science—the idea that there’s something that transcends our physical bodies, like a soul. There would have to be a way for physical matter to defy the first law of thermodynamics in order for our physical neurons to become non-physical emergent properties. Mitra moves away from this supposed problem and reframes the conversation surrounding the project by asking: if the BAM isn’t feasible, is it at least meaningful?
So I ask the same question of the “emergent” hypothesis—if it’s not a feasible hypothesis, is it at least a meaningful metaphor? I consider the final product of one of my first scientific collaborations, and find that a tangle of neurons becoming a thought is reflective of the tangle between my father’s influence and my own agency, a tangle that’s become increasingly interwoven in my memory. At the very least, it’s useful to think that we can’t fully separate our own influence from the contributions of others, that the myth of the single Inventor who creates lightbulbs and printing presses isn’t reflective of the interconnectedness of all the individuals, research, materials and time that went into each piece of each great Printing Press. Adding artificial intelligence and robotics to that web doesn’t result in the end of our vesica piscis or a melodramatic halt to human life as we know it—we instead have the opportunity to realize how interconnected, emergent and collaborative our cognition has always been.
When considering whether free will truly existed or was a human illusion, William James stated—inspired by a diary entry of Charles Renouvier, who was inspired by Immanuel Kant, who reached back to debate Hobbes, Locke, Hume, Spinoza, Aquinas, and Descartes—that his “first act of free will” would be “to believe in free will.” So I will stand on the back of the infinite turtle stack of philosophers here and say that my first act of an autonomy that complements rather than contradicts collaboration is to believe in it.
Megan Mericle is an MFA candidate in creative nonfiction and writing instructor at the University of Alaska at Fairbanks. She is the web and design editor for Permafrost. Her work seeks to connect the humanities to the sciences through personal exploration of quantum physics, psychology and science fiction.