Meet Cassie, a sleek bipedal robot made by Agility Robotics.
Two Cassies decide to take a walking tour of our office. No CG: 100% actual robots.
More Here
HV. Self-replicating artificial intelligence program named Dorothy.
Galerians: Ash (2002) PS2
I, Robot (2004)
Polish priest blessing a newly opened “Bitcoin embassy.” Warsaw, 2014.
Virtual Genetic Code
TESS_ERACT
“You think I’m scared of death? I’ve done it a million times, and I’m fucking great at it.”
give em the ol’ razzle dazzle
Graphics research from Daniel Sýkora et al. at DCGI, Czech Republic, presents a method of real-time style transfer focused on human faces, similar to their previous StyLit work.
Note that the video below is a demonstration of results, is silent, and that the official paper has not yet been made public:
Results video for the paper: Fišer et al.: Example-Based Synthesis of Stylized Facial Animations, to appear in ACM Transactions on Graphics 36(4):155, SIGGRAPH 2017.
[EDIT: 20 July 2017]
An official video (no audio) and project page have been made public:
We introduce a novel approach to example-based stylization of portrait videos that preserves both the subject’s identity and the visual richness of the input style exemplar. Unlike the current state-of-the-art based on neural style transfer [Selim et al. 2016], our method performs non-parametric texture synthesis that retains more of the local textural details of the artistic exemplar and does not suffer from image warping artifacts caused by aligning the style exemplar with the target face. Our method allows the creation of videos with less than full temporal coherence [Ruder et al. 2016]. By introducing a controllable amount of temporal dynamics, it more closely approximates the appearance of real hand-painted animation in which every frame was created independently. We demonstrate the practical utility of the proposed solution on a variety of style exemplars and target videos.
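The core idea behind non-parametric, example-based synthesis (as opposed to neural style transfer) is that output pixels are copied from the style exemplar by matching local patches through a guidance pair. The sketch below is purely illustrative and is not the authors' code; it shows a brute-force, grayscale version of guided patch matching, where `stylize` and its parameters are hypothetical names chosen for this example (real systems use approximate search such as PatchMatch, multiple guidance channels, and temporal terms):

```python
import numpy as np

def stylize(style, guide_src, guide_tgt, patch=3):
    # style, guide_src: exemplar pair; guide_tgt: guidance for the target frame.
    # All are 2-D float arrays of the same shape (grayscale for brevity).
    r = patch // 2
    h, w = guide_tgt.shape
    out = np.zeros((h, w), dtype=style.dtype)
    # Pad the guides so every pixel has a full neighbourhood.
    gs = np.pad(guide_src, r, mode="edge")
    gt = np.pad(guide_tgt, r, mode="edge")
    # Collect all exemplar patches once (brute force for clarity).
    src_patches = np.stack([
        gs[y:y + patch, x:x + patch].ravel()
        for y in range(h) for x in range(w)
    ])
    coords = [(y, x) for y in range(h) for x in range(w)]
    for y in range(h):
        for x in range(w):
            # Nearest-neighbour patch in the source guide (L2 distance) ...
            q = gt[y:y + patch, x:x + patch].ravel()
            best = np.argmin(((src_patches - q) ** 2).sum(axis=1))
            # ... determines which style pixel gets copied to the output.
            out[y, x] = style[coords[best]]
    return out
```

When the target guide equals the source guide, every patch matches itself and the exemplar is reproduced; varying the guide retargets the exemplar's texture, which is why this family of methods retains the local textural detail of the artwork rather than averaging it away.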
Link