The r25 looks generally O.K. in terms of expression with the gestures I tried to create. The software runs terribly on my computer for some reason; it wouldn't install properly on my Windows, Ubuntu, or Kali OS drives. Once I finally got to work creating the moves for the robot, the software turned out to be rather intuitive for joint movements, since all I had to do was create graph points for each joint to interpolate between. For each gesture I only needed to move about 7-12 joints, with about 5-10 interpolation points per joint.
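To give a sense of what those graph points amount to, here is a minimal sketch of the kind of keyframe interpolation the animation editor is doing under the hood. The joint names, angles, and helper function are my own made-up illustration, not the r25's actual software or API.

```python
# Sketch of keyframe interpolation for a joint animation.
# Joint names and keyframe values are hypothetical; the editor does the
# equivalent of this when it interpolates between the graph points.

def interpolate(keyframes, t):
    """Linearly interpolate a joint angle at time t from (time, angle) keyframes."""
    keyframes = sorted(keyframes)
    if t <= keyframes[0][0]:
        return keyframes[0][1]
    if t >= keyframes[-1][0]:
        return keyframes[-1][1]
    for (t0, a0), (t1, a1) in zip(keyframes, keyframes[1:]):
        if t0 <= t <= t1:
            frac = (t - t0) / (t1 - t0)
            return a0 + frac * (a1 - a0)

# A gesture is just a handful of joints, each with a few keyframes (seconds, degrees).
wave_gesture = {
    "right_shoulder_pitch": [(0.0, 0), (0.5, -80), (2.0, -80), (2.5, 0)],
    "right_elbow":          [(0.5, 0), (1.0, 30), (1.5, -30), (2.0, 30)],
}

t = 1.25
for joint, keys in wave_gesture.items():
    print(joint, round(interpolate(keys, t), 1))
```

With only a handful of joints and 5-10 points per joint, a whole gesture stays small enough to tweak by hand, which is why the editor felt intuitive once it was running.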
The animations were generally not too difficult to produce and didn't cause many problems for me. The only issues were that I wanted the smile animation to include a wink and a pitch of the head to the right, but the eyes are linked together and the head only has roll and yaw.
In my opinion, the most effective gesture is the one for 1.b, the uncanny valley gesture. This is because the robot is still very robotic and not human enough to trick us, so the movements look jerky and not organic. The other gestures are decent, and the survey results show that. However, the uncanny valley gesture is the most effective at looking creepy, and this shows up in the survey results ("still robot and creepy") for the robot itself, which are rather low.
Instead of having vision do all the work, these robots have tactile sensing capabilities. Simple interactions, such as a handshake to introduce ourselves to the technology, would likely be improved and made much easier by this additional hardware. These sensors would also help with personal boundaries, such as backing off from someone the robot is too close to, or simulating fear or hesitation at being touched by a human and then moving away. The r25's only senses are software messages and vision (perhaps torque in the joints, but I haven't seen evidence of this), which limits how much of what humans try to communicate it can actually sense. The feeling of interaction could be greatly deepened by the Geminoids being able to sense our touch.
The facial features of the Geminoids are a bit uncanny and thus feel slightly uncomfortable in themselves; however, the skin used to cover the robot looks extremely believable. The only criticism I can make about the skin is that it might look generally dry all over, but besides that it looks very much like my own skin. This differs greatly from the r25, which looks cartoon-like and glossy; only the head even looks skinned, and it's readily apparent that the skin on the r25 isn't human at all. The impact of this skin is that people would probably feel much better about talking to and facing a robot that looks a lot more like them.
1) It would help to have more degrees of freedom all around, both to express emotions better and to physically interact with and touch people, such as with handshakes and hand waving. I touched on this at the beginning; the issue is that this would increase just about everything: power, size, software complexity, and cost. Shrugging, which should be a simple gesture, would be much easier with more joints, since right now I would have to just move the arm forward to simulate some sort of pseudo-shrug. More joints would allow for a greater range of expression and a stronger connection with humans.
2) Depth sensing would help with people occlusion and with recognizing personal boundaries. Instead of making the vision software take up a lot of computational power and time, depth sensing would allow the r25 to easily detect who is closer to it and who is in front of whom. The r25 would then be able to face, communicate with, and gesture to the correct person, or back off from people who are too close to it, as in the sketch below.
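As a rough sketch of what I mean, assuming the r25 had a depth camera and some person detector feeding it distances (neither of which it actually has), picking who to face and deciding when to back off could be as simple as:

```python
# Hypothetical sketch: given detected people with depth readings (meters),
# choose who to face and decide whether to back off. None of this is r25 API;
# the detections would come from whatever depth camera / detector was added.

PERSONAL_SPACE_M = 0.75  # assumed comfort threshold

def choose_target(detections):
    """detections: list of (person_id, distance_m, bearing_deg)."""
    if not detections:
        return None, "idle"
    nearest = min(detections, key=lambda d: d[1])
    person_id, distance, bearing = nearest
    if distance < PERSONAL_SPACE_M:
        return person_id, "back_off"      # too close: retreat or lean back
    return person_id, "face_and_gesture"  # otherwise turn toward and engage

print(choose_target([("a", 2.1, -15.0), ("b", 0.6, 5.0)]))  # -> ('b', 'back_off')
```

The point is that with depth data the "who is closest, who is occluded" question becomes a simple comparison rather than a heavy vision problem.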