What do you think of when the word robot is mentioned? Is it still the Terminator, or a Transformer? We tend to think of the most spectacular machines first, imaginary though they may be, and only then move toward the actual, i.e. industrial robots. Yet what comes to mind appears to be changing. For example, a few weeks ago Stephen Colbert did a segment on real robots that we should fear (and if that isn’t convincing, click here). I’m guessing this change is happening for a couple of reasons. First, through technical advances and a growing community of makers, the gap between what can be imagined and what can be made is closing. Second, all the curious machines that would rarely have entered the public eye before online video are now going viral. Here are a few that come to mind…
I’ve decided to change the name of the blog from “The Making of ADB” to “Machines for Social Circumstances”. This also reflects certain changes in content. For the past year I’ve been documenting my efforts to build a social robot. The primary utility has been to keep track of my activities and ideas as I go through the making process. In the beginning my concerns were primarily technical; it took a lot of work to get this thing working in a basic way. Now that I’ve shown it once, the project has started getting a little attention here and there. The effect of this has been to get me thinking again about why the hell anyone should care. And so, as I’ve started to develop these thoughts again, I’ve decided to generalize a bit and branch out a little from the one project. Funnily enough, in doing so, I’ve adopted a name for the blog that I used for a set of drawings developed some years ago: Machines for Social Circumstances. The premise of the drawings, and now of the blog, is to explore the design of machines that fill social voids. Okay, for the most part this won’t mean a significant change. I’ll still be talking mostly about ADB, as it’s the project I’m working on, but I’ll also talk more generally about social robots, and I’ll likely start dissing Twitter a bit. We’ll see where it goes. BTW, my four-year-old PC is crashing in a bad way today, so I’m working on my roommate’s Mac. I both adore its smoothness and robustness, and yet detest its closed glossiness and price. Is it time I accept the new electronic landscape and get one?
At this stage I want to improve the software so that the robot has a greater variety of dynamic behaviours. I’ve described the old software previously, but just to recap: there was only one behaviour, trying to get close to the person. It worked by each module assessing whether it had the ability to increase the number of modules making skin contact; if so, the module would turn until contact was made. That required each module to speak to its neighbours, and the model for this is represented in the following diagram:
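To make the neighbour-talking idea concrete, here is a minimal sketch of that per-module loop. All the names here (`Module`, `local_contact_count`, `step`) are invented for illustration; they are not ADB’s actual firmware.

```python
# Hypothetical sketch of the decentralized contact-seeking behaviour:
# each module only sees itself and its immediate neighbours.

class Module:
    def __init__(self):
        self.left = None       # neighbouring module, or None at the ends
        self.right = None
        self.contact = False   # is this module's skin sensor touched?
        self.angle = 0         # current joint angle, degrees (illustrative)

    def local_contact_count(self):
        # Count contacts among self and immediate neighbours --
        # the only information a decentralized module has access to.
        count = int(self.contact)
        for n in (self.left, self.right):
            if n is not None and n.contact:
                count += 1
        return count

    def step(self, turn_increment=5):
        # If this module is not touched but a neighbour is, turning may
        # bring more skin into contact, so keep turning until it does.
        if not self.contact and self.local_contact_count() > 0:
            self.angle += turn_increment
```

Because each module acts only on local information, the same rule running on every module produces the chain-wide curling toward the person, with no central controller.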
Taking that decentralized model as a starting point, the robot will now try different behaviours depending on stimulus. Its speed, force, attraction, and persistence will all be affected by stimulus, which may result in various effects such as soft snuggling, hard repulsion, or other things in between. In trying to figure out how to model the various possibilities, the only style I could think of was a Decision Tree borrowed from Game Theory. It seems to do the trick, but I wonder if this is really the best way to approach it.
See the first node marked Robot Trust? That is a persistence factor. If the bulk of the interaction has been unsafe, as measured by User Valence (the hardness of their touch) and by the speed of the interaction, then the robot will attempt to look after itself more and be less inclined to pay attention to the user, whereas if the user is slow and gentle in their handling, then over time the robot may become more playful, experimental, and reciprocal in its interaction. Again, the important thing is that the idea of trust is rooted in instrumental utility rather than in anthropomorphic imitation. Trust seems to be the only quality that needs explaining: speed, valence (which is really the hardness of touch), and attraction (trying to make contact) are all intimately tied to mechanisms.
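The trust-as-persistence idea can be sketched in a few lines. The thresholds, update rate, and function names below are all invented for the example; only the shape of the idea (gentle and slow handling raises trust over time, hard and fast handling lowers it, and trust then gates which branch of the tree the robot takes) comes from the description above.

```python
# Illustrative sketch of the Robot Trust persistence factor.
# valence and speed are assumed normalized to 0..1, where higher
# means harder touch / faster movement. Values here are made up.

def update_trust(trust, valence, speed, rate=0.1):
    """Nudge trust toward 1.0 after a gentle, slow interaction sample,
    and toward 0.0 after a hard or fast one."""
    safe = valence < 0.5 and speed < 0.5
    target = 1.0 if safe else 0.0
    trust += rate * (target - trust)
    return min(1.0, max(0.0, trust))

def choose_behavior(trust):
    # High trust: playful, reciprocal branch of the decision tree.
    # Low trust: self-protective branch (pay less attention to the user).
    return "reciprocate" if trust > 0.5 else "withdraw"
```

Because trust is updated incrementally rather than set directly, a single rough grab doesn’t flip the robot’s disposition; only the bulk of the interaction does, which matches the persistence framing above.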
I’ll be giving a lightning talk at Business 3.0: How The Future of Technology Will Change Business on October 6th.
Also, Ian Garrick Mason wrote a short summary of ADB on the blog of Scope. You can read it here. Scope looks to be an interesting new magazine dealing with the intersection of culture, science, politics, and business. Good luck.
Ian’s summary features a short video interview, which I should have posted a while ago, and so without further ado…(but still go check it out at Scope).
So I seem to be getting back into ADB again. Usually I take a long hiatus from a project after a major show, on the order of six months or more. I haven’t done that this time. It’s been less than two months since SIGGRAPH, and in the time that’s passed I’ve written a paper about the project, which took a couple of weeks, and I tried some documentation, which only turned out so-so. So it has been relatively slow on the making front. It was at least five weeks until I even cracked open the shipping case. A show like that took so much out of me that it’s hard to get back into the zone. Here’s the strange thing. I had been intending to submit documentation of the robot to the Japan Media Arts Festival, but didn’t get it finished in time. That was a bit disappointing, but also a relief. I don’t want to submit the robot for any more opportunities until it is a bit further developed, such that its behavior fully fits the description. Here’s the latest description:
After Deep Blue, or ADB for short, is a snake-like, modular robot designed for haptic interactions with people (see figure 1). It writhes, wriggles, twists and squeezes in response to how it is held and touched. It can be used to explore intimate and emotional relationships with technology through direct physical contact. ADB adapts to, and reciprocates, the energy you put into it through your body. When touched, it comes to life. When stroked, it seeks more contact. And soon, when it is harmed, it will defend itself or try to get away.
It’s that “soon it will” part that I have to change into a “now it does”. In fact, I’m not entirely sure it even does what it is supposed to at this point, which is seeking more contact. It kind of does that, but also seems to have a mind of its own in its movements.
So, I need to get control of the movements. To do this I have to learn whether the sensors are being falsely triggered, and if so, what is causing the triggering, and find a solution. Alternatively, if the sensors are triggering fine, then it is the software that is not dealing with the stimulus correctly. I already know that to some degree it is both of those problems. A module’s sensors respond to the proximity of other modules, but fortunately not as much as to a touch, so I should be able to differentiate between those phenomena (as long as that holds true when many are connected together). As for the software, right now the modules are told to go to discrete positions, whereas what I want is for them to turn until they make contact with skin.
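Both fixes can be sketched together. The assumption, taken from the observation above, is that touch raises the capacitive reading more than mere proximity does, so a pair of thresholds can separate the two; the threshold values and function names here are invented for illustration.

```python
# Sketch of separating neighbour-proximity noise from genuine touch,
# assuming touch produces a larger cap-sense reading than proximity.
# The threshold values are made up for the example.

PROXIMITY_MAX = 30   # readings up to here: likely just a nearby module
TOUCH_MIN = 60       # readings above here: actual skin contact

def classify(reading):
    if reading >= TOUCH_MIN:
        return "touch"
    elif reading >= PROXIMITY_MAX:
        return "ambiguous"   # between the bands: safer to ignore than act
    return "none"

def seek_contact(reading, angle, increment=2):
    # Turn until contact instead of commanding a discrete position:
    # keep nudging the joint angle as long as no touch is sensed.
    if classify(reading) != "touch":
        return angle + increment
    return angle
```

The dead band between the two thresholds is the part that depends on the proximity-vs-touch gap actually holding up once many modules are connected; if the readings overlap, the bands would need per-module calibration instead of fixed constants.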
In addition to these immediate issues, I also want to make some changes to the circuit, adding a power-ORing feature that switches between batteries and an external plug. I’ll also reduce the number of send pins for cap sensing (changing the cap sensing circuitry altogether if necessary), implement the gyro, use a faster baud rate for motor communication, and see if I can get batch programming of all modules working. Lots of other things as well. Started scratching the surface this week. More to come.
Not much happening with ADB for the next while as I attend to all the things that were missed while working on it for the past few months, such as prepping for a new year of teaching. I will hopefully have a description of the project that I finally like coming out soon, and of course I’ll post about that when it’s released. For now, here’s a recent pic…