Writeup

Solving Process

The r25 looks generally O.K. in terms of expression with the gestures I tried to create. The software ran terribly on my computer for some reason; it wouldn't install properly on my Windows, Ubuntu, or Kali OS drives. Once I finally got to work on creating the moves for the robot, the software turned out to be fairly intuitive for joint movements, since all I had to do was create graph points for each joint to interpolate between. Each gesture only needed about 7-12 joints moved, with about 5-10 interpolation points per joint.
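The graph-point approach above boils down to keyframe interpolation: each joint gets a handful of (time, angle) points and the software fills in the angles between them. A minimal sketch of that idea is below; the joint name, times, and angle values are made up for illustration and are not taken from the actual R25 software.

```python
def interpolate(keyframes, t):
    """Linearly interpolate a joint angle at time t from (time, angle) keyframes."""
    keyframes = sorted(keyframes)
    # Clamp to the first/last keyframe outside the animated range.
    if t <= keyframes[0][0]:
        return keyframes[0][1]
    if t >= keyframes[-1][0]:
        return keyframes[-1][1]
    # Find the surrounding pair of keyframes and blend between them.
    for (t0, a0), (t1, a1) in zip(keyframes, keyframes[1:]):
        if t0 <= t <= t1:
            frac = (t - t0) / (t1 - t0)
            return a0 + frac * (a1 - a0)

# A wave-like gesture on one hypothetical joint, using about 5 points
# as described above (angles in degrees, times in seconds).
elbow = [(0.0, 0.0), (0.5, 40.0), (1.0, 60.0), (1.5, 40.0), (2.0, 0.0)]
print(interpolate(elbow, 0.75))  # halfway between 40 and 60 -> 50.0
```

A full gesture would just run this per joint, sampling every joint's curve at each timestep.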

The animations were generally not too difficult to produce and didn't cause many problems for me. The only issue was that I wanted the smile animation to include a wink and a head tilt to the right, but the eyelids are connected and the head only has roll and yaw.

In my opinion, the most effective gesture is the one for 1.b, the uncanny valley gesture. The robot is still very robotic and not human enough to trick us, so the movements look jerky and inorganic. The other gestures are decent, and the survey results reflect that. However, the uncanny valley gesture is the most effective at looking creepy, which the survey results bear out: the anthropomorphism scores for the still robot and for the creepy gesture are both rather low.

Survey Results
Josh Cho - 1.b, Creepy
Anthropomorphism
3 2 4 4 2
Kai Yue - 1.c, Disapproving
Anthropomorphism
4 5 4 5 5
Clare Svirsko - Still r25
Anthropomorphism
3 1 2 2 3

Question 1

1) Limitation of joints to reduce cost and size
Lowering the number of joints allows the robot to be built more simply and in a smaller shell. This lets the r25 be sold at a lower price and thus be more generally accessible to people. However, a pretty big price is paid in its ability to display human-like emotion. In trying to make the smile, I wanted to cock the head a bit to the side and have the robot wink, but there's no pitch on the head, and the eyelids are connected, which prevents winking. The hands don't have independently moving fingers, so handshaking or hand-waving is rather difficult. These limitations cause problems for the other, serious attempts at displaying emotion (the creepy version is actually made easier by them), which lowers the robot's ability to properly interact with humans.
2) Focus of human parts is on face, rest is robotic
This effect isn't that bad, since most of our focus on other people is on the face anyway. I (and I'm sure others) can just imagine the robotic body as a suit over a normal human body we can't see. The face itself doesn't try to be extremely human-like; it has a more cartoonish feel due to its lack of wrinkles and finely detailed features. It seems to fall on the left side of the uncanny valley, so if it became slightly more detailed, it would start to look creepier. As it stands, though, the face is human-like enough without being awkward or creepy, which makes it comfortable to look at, program, and interact with.
3) Human-like speech
The speech isn't all that great, especially with the avatar function. It sounds like a slightly improved version of Microsoft Bob, which is about what I would expect from a computerized voice at this point, so it isn't very uncomfortable. Most people have probably experienced this type of voice and have come to expect it from a computer, so the voice doesn't lower interaction expectations too much, but it won't get very far in trying to impress. It isn't completely limited, but conveying emotion through voice is something it isn't built to do well, which restricts its interaction ability in this regard.

Question 2

1) Size
The Geminoids are completely human in size: HI-4 is 180 cm and F is 165 cm, both normal heights for males and females respectively. Standing or sitting next to one would feel completely natural, unlike, say, the r25, which one would have to put on a table or hunch next to. Assuming a reasonably good number of joints, handshaking, hugging, waving, and so on would feel very comfortable at this size.

2) Tactile Sensors

Instead of having vision do all the work, these robots have tactile sensing capabilities. Simple interactions such as a handshake to introduce ourselves to the technology would likely be improved and made easier by this hardware. These sensors would also help with personal boundaries, such as backing off from someone it's too close to, or simulating fear or hesitation at being touched by a human and then moving away. The r25's only senses are software messages and vision (perhaps torque in the joints, but I haven't seen evidence of this), which limits what it can sense of what humans try to communicate to it. The Geminoids' ability to sense our touch can greatly deepen the feeling of interaction.

3) Silicone skin and Urethane mesh

The facial features of the Geminoids are a bit uncanny and thus feel slightly uncomfortable in themselves; however, the skin used to cover the robot looks extremely believable. The only criticism I can make about the skin is that it might look generally dry all over, but otherwise it looks very much like my own. This differs greatly from the r25, whose skin looks cartoon-like and glossy, and only the head is skinned at all; it's readily apparent that the r25's skin isn't human. The impact of this skin is that people would probably feel much better about talking to and facing a robot that looks a lot more like them.

Question 3

1) It would help to have more degrees of freedom all around, both to express emotions better and to physically interact with and touch people, such as with handshakes and hand-waving. I talked about this at the beginning; the trade-off would be an increase in just about everything: power, size, software complexity, and cost. Shrugging, which should be a simple gesture, would be much easier with more joints, since right now I would have to just move the arm forward to simulate a pseudo-shrug. More joints would allow a greater range of expression and connection with humans.

2) Depth sensing would help with person occlusion and recognizing personal boundaries. Instead of vision software taking up a lot of computational power and time, depth sensing would let the r25 easily detect who is closer to it and who is in front of whom. The r25 would then be able to face, communicate, and gesture toward the correct human, or back off from people who are too close to it.
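The logic above is cheap once depth is available: addressing the right person reduces to picking the detection with the smallest depth, and "too close" becomes a threshold check. A minimal sketch follows; the names, the 0.5 m threshold, and the detection format are my own assumptions for illustration, not anything from the r25's actual software.

```python
# Assumed comfort threshold in meters (illustrative, not a real r25 value).
PERSONAL_SPACE_M = 0.5

def nearest_person(detections):
    """detections: list of (person_id, depth_m) pairs. Return the closest, or None."""
    return min(detections, key=lambda d: d[1]) if detections else None

def should_back_off(detections):
    """True when the closest detected person is inside the personal-space threshold."""
    closest = nearest_person(detections)
    return closest is not None and closest[1] < PERSONAL_SPACE_M

people = [("alice", 1.8), ("bob", 0.4), ("carol", 2.6)]
print(nearest_person(people))   # ('bob', 0.4) -- the person to face and address
print(should_back_off(people))  # True -- bob is inside 0.5 m
```

Compared with running full vision-based occlusion reasoning, this is a single pass over the detections, which is the computational saving argued for above.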