It’s been well over a year since I updated the blog. This blog was great while I was working on the second version of adb, and was invaluable for observing my own workflow. Since then, I’ve been busy working on a version of the robot called SnakeThing, which I intend to release as a toy. If you’d like more info, want to get in touch, or would like to sign up for my newsletter, please visit: steddyrobots.com
I recently wrote an article for a new magazine called Scope. It outlines many of the ideas underlying the design of my robots. Please check it out. You can download the entire first issue from their site here (mirrored here). I also recommend the other articles and their blog: http://www.scope-mag.com. Their approach is great. Judging from the editor’s comments at the beginning of the first issue, they are interested in what distinguishes this century from previous ones.
If I haven’t said it before, here it is… RobotShop is great. Good prices and great service. And I do mean great. During the summer I bought some stuff from them, and the manufacturer packed the wrong extra doohickeys with the motors. I complained to RobotShop and promptly forgot about it. As far as I’m concerned they have a monopoly on certain parts, so I’d have to go back to them anyway. Besides, it wasn’t their mistake but the manufacturer’s, who said that the doohickeys were no longer included. Anyways, though I forgot, RobotShop didn’t. They kept on the case, and some weeks later sent me the missing parts. Furthermore, that order was a special rush delivery they did for me. I desperately needed those motors, but they were out of stock, and rushes weren’t normally available on those parts. They arranged it anyways. The long and the short of it is that RobotShop is alright.
In lieu of any new writing at the moment, here is an old video. This is the Blanket Project as shown on a Japanese game show called Sekai Gyoten News. The premise of the show is that the guests, who I think are celebrities, are given some clues and have to solve a mystery. The mystery is embedded in video segments which tell a story. So here’s the story behind the Blanket Project. There is a teenage boy who is a peeping tom. He likes to stare at his young, hot neighbour through her window. When she shuts the blinds at night, the boy turns his attention to other neighbours, like a little kid who kicks off his covers every night. On one of these occasions he notices the little kid is all covered up. So the big mystery is: how is it that the kid no longer has the problem? The answer, of course, is that he has acquired a robotic blanket. I love that the invasion of privacy framing the mystery goes entirely unquestioned. It was a pretty outrageous experience all in all. I would love to make it to Japan at some point; I think the machines would go over well there. Anyways, the whole video is about 10 minutes, so here’s a short excerpt.
Abler is a think-space about art, design and adaptive technologies. Here you’ll find links to artwork that may be thought of as “adaptive” in both explicit and implicit ways: work that uses tools, instruments, proxies, cyborgs, or other extensive machinery to augment the way we live. I’m interested in adaptation for practical purposes—creating greater options for the literal challenges that disabilities present. I’m also interested in adaptation in a metaphorical or speculative sense: tools, real or imagined, that make visible the less-apparent, but no less human, challenges we encounter. – Sara Hendren
The two terms go together like peanut butter and chocolate. Funnily, they also get to the heart of my interest in robots. A philosophical zombie is the idea that, were we able to fabricate an entity materially identical to a person, it would nonetheless not be a person, because it would lack the spiritual essence that people have. Believers in Strong AI hold that artificial entities should be able to reproduce any human ability and even outperform us. Furthermore, they hold that the entity need not even be materially identical; it just needs sufficient mechanisms to carry out the process. Conveniently, computers seem to have these mechanisms, and as such we should be able to manufacture a human equivalent. I’m not entirely sure it’s possible (it’s certainly frickin’ difficult). For sure, machines have accurately replicated some abilities, and when they do, they replace us at those tasks (e.g. in the workforce). Yet an accurate description of our motivations as living things continues to elude us. (I think capitalism itself, as expressed through the stock market, would probably combust if we could figure out these motivations.) I imagine it is possible in theory, but what appears to be happening in practice is rather an inter-meshing of people with machines through networks. They augment our capacities and fill in gaps, while we continue to provide them with direction. If this trend continues, then I’d expect that eventually all of our important capacities will be reproduced by machines, and that organisms will simply provide motivations. But not people necessarily. Maybe people at first, but it would eventually be more efficient to use simple cellular organisms, a kind of motivational battery. Who knows, a few researchers could simply crack artificial motivation one day: figure out the algorithm and, boom, there we go, self-deterministic, autonomous, even creative machines.
I also recognize that it’s extremely difficult to make predictions about the future of our lifestyles that hold true more than a few months away, because we inhabit a complex, seemingly chaotic system. Still, there are ways we can investigate these ideas in the present, based on current technology. In my work I’m essentially trying to reproduce some sentience-like abilities in a machine and then position the artificial entity next to a human counterpart and watch what unfolds.
Olafur Eliasson’s artwork is striking for how it frames the entire perceptual field as a canvas upon which the artist can act. I’m glad to see him offer up this map showing how he thinks emotions operate within us, and within a larger social context.
Considering he is an artist, I’d be curious to know how he relates shapes and colors to the ideas, and specifically to the text. His depiction follows the Euler-diagram model, which is used in color theory, a field Eliasson draws on heavily in his art. Some of his works seem to place the viewer inside a Euler diagram.
Now compare his diagram with the Decision Tree I posted last week showing how I’m going to model feelings in my robot. On the one hand, mine is more precise (though not necessarily more accurate); on the other, it is simplistic and clunky in its depiction of the relations between emotional components. But because of its narrow scope it is practical to reproduce, a prerequisite for the exacting technology I use. There are inspiring ideas in Eliasson’s map, but also missing details. That, I suppose, is the benefit of the visual arts. It’s my job to figure out how something like that could actually be reproduced.
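To give a flavor of what “precise but simplistic” means here, a decision tree for feelings boils down to a handful of fixed thresholds and discrete labels. The sketch below is purely hypothetical (the signals, thresholds, and feeling names are mine for illustration, not the actual tree from last week’s post), but it shows why such a model is trivially reproducible in a machine while flattening the relations Eliasson’s map gestures at:

```python
# A minimal, hypothetical sketch of a decision-tree model of "feelings".
# The two input signals, the thresholds, and the labels are illustrative
# assumptions only -- not the actual tree from the earlier post.

def classify_feeling(stimulation: float, comfort: float) -> str:
    """Map two normalized internal signals (0.0-1.0) to a feeling label."""
    if stimulation > 0.7:
        # High stimulation: comfort decides whether it reads as good or bad.
        return "excited" if comfort > 0.5 else "agitated"
    elif stimulation > 0.3:
        # Middling stimulation: a calmer pair of states.
        return "content" if comfort > 0.5 else "uneasy"
    else:
        # Low stimulation reads as withdrawal regardless of comfort.
        return "dormant"

print(classify_feeling(0.8, 0.9))  # -> excited
print(classify_feeling(0.5, 0.2))  # -> uneasy
```

Every branch is exact and mechanically checkable, which is what makes it practical to build, but the tree has no way to express emotions blending or feeding back into one another the way a map like Eliasson’s suggests.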