It’s been well over a year since I’ve updated the blog. This blog was great while I was working on the second version of adb, and was invaluable for observing my own workflow. Since then, I’ve been busy working on a version of the robot called SnakeThing, which I’m intending to release as a toy. If you’d like more info on that, want to get in touch, or want to sign up for my newsletter, please visit: steddyrobots.com
I wrote an article recently for a new magazine called Scope. It outlines many of the ideas underlying the design of my robots. Please check it out. You can download the entire first issue from their site here (mirrored here). I also recommend the other articles and their blog: http://www.scope-mag.com. Their approach is great. Judging from the Editor’s comments at the beginning of the first issue, they are interested to see what distinguishes this century from others.
If I haven’t said it before, here it is… RobotShop is great. Good prices and great service. And I do mean great. During the summer, I bought some stuff from them. The manufacturer packed some wrong extra doohickeys with the motors. I complained to RobotShop and promptly forgot about it. As far as I’m concerned they have a monopoly on certain parts, so I’d have to go back to them anyway. Besides, it wasn’t their mistake but the manufacturer’s, who said that the doohickeys were no longer included. Anyways, though I forgot, RobotShop didn’t. They kept on the case, and some weeks later sent me the missing parts. What’s more, that order was a special rush delivery they did for me. I desperately needed those motors, but they were out of stock, and rushes weren’t normally available on those parts. They arranged it anyways. The long and the short of it is that RobotShop is alright.
In lieu of any new writing at the moment, here is an old video. This is the Blanket Project as shown on a Japanese game show called Sekai Gyoten News. The premise of the show is that the guests, who I think are celebrities, are given some clues and have to solve a mystery. The mystery is embedded in video segments which tell a story. So here’s the story behind the Blanket Project. There is a teenage boy who is a peeping tom. He likes to stare at his young, hot neighbour through her window. When she shuts the blinds at night, the boy turns his attention to other neighbours, like this little kid who kicks off his covers every night. On one of these occasions he notices the little kid is all covered up. So the big mystery is: how is it that the kid no longer has the problem? The answer, of course, is that he has acquired a robotic blanket. I love that the invasion of privacy framing the mystery goes completely unquestioned. It was a pretty outrageous experience all in all. I would love to make it to Japan at some point. I think the machines would go over well there. Anyways, the whole video is about 10 minutes; here’s a short excerpt.
Abler is a think-space about art, design and adaptive technologies. Here you’ll find links to artwork that may be thought of as “adaptive” in both explicit and implicit ways: work that uses tools, instruments, proxies, cyborgs, or other extensive machinery to augment the way we live. I’m interested in adaptation for practical purposes—creating greater options for the literal challenges that disabilities present. I’m also interested in adaptation in a metaphorical or speculative sense: tools, real or imagined, that make visible the less-apparent, but no less human, challenges we encounter. – Sara Hendren
The two terms go together like peanut butter and chocolate. Funnily, they also get to the heart of my interest in robots. A philosophical zombie is the idea that, were we able to fabricate an entity materially identical to a person, it would nonetheless not be a person because it would lack the spiritual essence that people have. Believers in Strong AI hold that artificial entities should be able to reproduce any human ability and even outperform us. Furthermore, they hold that it is not even necessary for the entity to be materially identical; it just needs sufficient mechanisms to carry out the process. Conveniently, computers seem to have these mechanisms, and as such we should be able to manufacture a human equivalent.

I’m not entirely sure it’s possible (it’s certainly frickin’ difficult). For sure, machines have accurately replicated some abilities, and when they do, they replace us at those tasks (e.g. in the workforce). Yet an accurate description of our motivations as living things continues to elude us. (I think Capitalism itself, as it is expressed through the stock market, would probably combust if we could figure out these motivations.)

I imagine it is possible in theory, but what appears to be happening in practice is rather an inter-meshing of people with machines through networks. They augment our capacities and fill in gaps, while we continue to provide them with direction. If this trend continues, then I’d expect that eventually all of our important capacities will be reproduced by machines, and that organisms will simply provide motivations. But not people necessarily. Maybe people at first, but it would eventually be more efficient to use simple cellular organisms, a kind of motivational battery. Who knows, a few researchers could simply crack artificial motivation one day; figure out the algorithm and boom, there we go: self-deterministic, autonomous, even creative machines.
I also recognize that it’s extremely difficult to make predictions about the future of our lifestyles that hold true more than a few months away, because we inhabit a complex, seemingly chaotic system. Still, there are ways we can investigate these ideas in the present, based on current technology. In my work I’m essentially trying to reproduce some sentience-like abilities in a machine and then position the artificial entity next to a human counterpart and watch what unfolds.
Olafur Eliasson’s artwork is striking for how it frames the entire perceptual field as a canvas upon which the artist can act. I’m glad to see him offer up this map showing how he thinks emotions operate within us, and within a larger social context.
Considering he is an artist, I’d be curious to know how he relates shapes and colors to the ideas, and specifically to the text. His depiction follows the Euler-diagram model, which is also used in color theory, something Eliasson draws heavily on for his art. Some of his works seem to place the viewer inside a Euler diagram.
Now compare his diagram with the Decision Tree I posted last week showing how I’m going to model feelings in my robot. On the one hand it is more precise (though not necessarily more accurate); on the other, simplistic and clunky in its depiction of the relations between emotional components. But because of its narrow scope it is practical to reproduce, a prerequisite for the exacting technology I use. There are inspiring ideas in Eliasson’s map, but also missing details. That, I suppose, is the benefit of the visual arts. It’s my job to figure out how something like that could actually be reproduced.
What do you think of when the word robot is mentioned? Is it still the Terminator, or a Transformer? We tend to think of the most spectacular machines first, imaginary though they may be, and then move towards the actual, i.e. industrial robots. Yet it appears as though what comes to mind is beginning to change. For example, a few weeks ago Stephen Colbert did a segment on real robots that we should fear (and if that isn’t convincing, click here). I’m guessing that this change is happening for a couple of reasons. First, through advancements and a growing community of makers, the gap between what can be imagined and what can be made is closing. Secondly, all the curious machines that rarely would have entered the public eye prior to online video are now going viral. Here are a few that come to mind…
I’ve decided to change the name of the blog from “The Making of ADB” to “Machines for Social Circumstances”. This also reflects certain changes in content. For the past year I’ve been documenting my efforts to build a social robot. The primary utility has been to keep track of my activities and ideas as I go through the making process. In the beginning my concerns were primarily technical. It took a lot of work to get this thing working in a basic way. Now that I’ve shown it once, the project has started getting a little attention here and there. The effect of this has been to get me thinking again about why the hell anyone should care. And so, as I’ve started to develop these thoughts again, I’ve decided to generalize a bit and branch out from the one project. Funnily enough, in doing so I’ve adopted a name for the blog that was used for a set of drawings developed some years ago: Machines for Social Circumstances. The premise for the drawings, and now for the blog, is to explore the design of machines that fill social voids. Okay, for the most part this won’t mean a significant change. I’ll still be talking about ADB mostly, as it’s the project I’m working on, but I’ll also talk more generally about social robots, and I’ll likely start dissing Twitter a bit. We’ll see where it goes. BTW my four-year-old PC is crashing in a bad way today, so I’m working on my roommate’s Mac. I both adore its smoothness and robustness, and yet detest its closed glossiness and price. Is it time I accept the new electronic landscape and get one?
At this stage I want to improve the software so that the robot has a greater variety of dynamic behaviours. I’ve described the old software previously, but just to recap: there was only one behaviour, trying to get close to the person. It worked by having each module assess whether it could increase the number of modules making skin contact; if it could, the module would turn until contact was made. That required each module to talk to its neighbours, and the model for this is represented in the following diagram:
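For concreteness, here is a minimal sketch of that one old behaviour as I’ve described it. The class and method names (Module, skin_contact, turn_toward_contact) are invented for illustration, not the robot’s actual firmware, and the motor action is faked: turning toward a touching neighbour is simply assumed to produce contact.

```python
# Invented sketch of the original single behaviour: each module
# decides, from its neighbours' reports alone, whether turning
# could raise the number of modules making skin contact.

class Module:
    def __init__(self, name):
        self.name = name
        self.skin_contact = False  # does this module's skin sensor fire?
        self.neighbours = []       # adjacent modules it can talk to

    def wants_to_turn(self):
        """Turn only if a neighbour has contact and we don't."""
        return (not self.skin_contact and
                any(n.skin_contact for n in self.neighbours))

    def turn_toward_contact(self):
        # Stand-in for the motor command; assume turning toward a
        # touching neighbour eventually yields contact.
        self.skin_contact = True

# Wire up a chain of three modules; a person touches module "a".
a, b, c = Module("a"), Module("b"), Module("c")
a.neighbours, b.neighbours, c.neighbours = [b], [a, c], [b]
a.skin_contact = True

# One synchronous pass: decide from a snapshot, then act.
movers = [m for m in (a, b, c) if m.wants_to_turn()]
for m in movers:
    m.turn_toward_contact()
# After one pass b has curled into contact; c follows on the next.
```

The point of the snapshot-then-act pass is the decentralized part: no module sees the whole robot, yet contact spreads module by module down the chain.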
Taking that decentralized model as a starting point, the robot will now try different behaviours depending on stimulus. Its speed, force, attraction, and persistence will all be affected by stimulus, and may result in various affects such as soft snuggling, hard repulsion, or things in between. In trying to figure out how to model the various possibilities, the only style I could think of was a Decision Tree borrowed from Game Theory. I wonder if this is really the best way to approach it, but it seems to do the trick…
See the first node marked Robot Trust? That is a persistence factor. If the bulk of the interaction has been unsafe, as measured by User Valence (the hardness of their touch) and by the speed of the interaction, then the robot will attempt to look after itself more and be less inclined to pay attention to the user. If instead the user is slow and gentle in their handling, then over time the robot may become more playful, experimental, and reciprocal in its interaction. Again, the important thing is that the idea of trust is rooted in instrumental utility rather than in anthropomorphic imitation. Trust is the only quality that needs explaining; speed, valence (really the hardness of touch), and attraction (trying to make contact) are all intimately tied to mechanisms.
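The trust node could be sketched as a simple accumulator, something like the following. To be clear, this is my own illustrative guess at one way to implement it: the thresholds, gain/loss rates, and behaviour names are all invented, and real touch hardness and speed would come from sensors, normalised to the 0–1 range assumed here.

```python
# Illustrative sketch of the "Robot Trust" persistence factor:
# gentle, slow handling raises trust; hard or fast handling
# lowers it faster than it was gained. All values are in [0, 1].

def update_trust(trust, touch_hardness, speed,
                 gain=0.05, loss=0.15,
                 hardness_limit=0.6, speed_limit=0.6):
    """Raise trust after gentle, slow contact; lower it otherwise."""
    if touch_hardness <= hardness_limit and speed <= speed_limit:
        trust += gain * (1.0 - trust)   # slow saturation toward 1
    else:
        trust -= loss * trust           # quicker decay toward 0
    return max(0.0, min(1.0, trust))

def choose_behaviour(trust, playful_threshold=0.7, guarded_threshold=0.3):
    """Map trust onto the coarse branches of the decision tree."""
    if trust >= playful_threshold:
        return "playful"                # experimental, reciprocal
    if trust <= guarded_threshold:
        return "self-protective"        # look after itself, ignore user
    return "neutral"

trust = 0.5
for _ in range(20):                     # twenty gentle, slow touches
    trust = update_trust(trust, touch_hardness=0.2, speed=0.3)
print(choose_behaviour(trust))          # trust has climbed well past 0.7
```

The asymmetry between gain and loss is what makes it a persistence factor: a single hard grab undoes several gentle touches, so trust reflects the bulk of the interaction rather than the last moment.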