Graphics research from Daniel Sýkora et al. at DCGI, Czech Technical University in Prague, presents a method for real-time style transfer focused on human faces, similar to their previous StyLit work.
Note that the video below is a silent demonstration of results; the official paper has not yet been made public:
Results video for the paper: Fišer et al.: Example-Based Synthesis of Stylized Facial Animations, to appear in ACM Transactions on Graphics 36(4):155, SIGGRAPH 2017.
[EDIT: 20 July 2017]
An official video (no audio) and project page have been made public:
We introduce a novel approach to example-based stylization of portrait videos that preserves both the subject’s identity and the visual richness of the input style exemplar. Unlike the current state-of-the-art based on neural style transfer [Selim et al. 2016], our method performs non-parametric texture synthesis that retains more of the local textural details of the artistic exemplar and does not suffer from image warping artifacts caused by aligning the style exemplar with the target face. Our method allows the creation of videos with less than full temporal coherence [Ruder et al. 2016]. By introducing a controllable amount of temporal dynamics, it more closely approximates the appearance of real hand-painted animation in which every frame was created independently. We demonstrate the practical utility of the proposed solution on a variety of style exemplars and target videos.
Link
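Since the paper isn't out yet, here is the gist of what "non-parametric texture synthesis" means in this context: rather than optimizing neural feature statistics, patches of the style exemplar are copied into the output wherever guidance channels match. Below is a toy, hypothetical Python/numpy sketch; the single guide channel, brute-force nearest-neighbour search, and patch tiling are simplifications for illustration, not the authors' actual algorithm:

```python
import numpy as np

def guided_patch_synthesis(style, style_guide, target_guide, patch=5):
    """Toy guided non-parametric synthesis: for each patch of the target
    guide, copy the style patch whose guide patch matches best.

    style        -- HxWx3 style exemplar (e.g. a painted face)
    style_guide  -- HxW guidance channel aligned with the style exemplar
    target_guide -- H'xW' guidance channel for the frame being stylized
    """
    H, W = target_guide.shape
    out = np.zeros((H, W, 3), dtype=style.dtype)

    # Collect candidate guide patches from the style exemplar (brute force).
    sh, sw = style_guide.shape
    coords = [(y, x) for y in range(0, sh - patch + 1, 2)
                     for x in range(0, sw - patch + 1, 2)]
    cand = np.stack([style_guide[y:y + patch, x:x + patch].ravel()
                     for y, x in coords])            # N x patch^2

    # Tile the output with the best-matching style patches.
    for ty in range(0, H - patch + 1, patch):
        for tx in range(0, W - patch + 1, patch):
            q = target_guide[ty:ty + patch, tx:tx + patch].ravel()
            i = np.argmin(((cand - q) ** 2).sum(axis=1))  # nearest neighbour
            sy, sx = coords[i]
            out[ty:ty + patch, tx:tx + patch] = style[sy:sy + patch,
                                                      sx:sx + patch]
    return out
```

A real system would use multiple guide channels (segmentation, positional, appearance), overlapping patches with blending, and a fast approximate search; this sketch only shows the copy-the-best-patch core that distinguishes the approach from parametric neural style transfer.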
Verizon Cancels Elderly Woman’s Service on Her 84th Birthday http://ift.tt/2vePM37
Augmented reality sandbox: move the sand around and it displays the resulting topography and sea/water level.
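For context, these sandboxes typically pair a depth camera looking down at the sand with a projector painting elevation colours back onto it. A rough, hypothetical sketch of the depth-to-colour step (the water level, height bands, and palette below are invented for illustration):

```python
import numpy as np

def elevation_tint(depth_mm, water_level_mm=900.0):
    """Map a depth image (distance from sensor, in mm) to topographic
    colours. Higher sand = smaller depth. Bands/colours are illustrative."""
    height = water_level_mm - depth_mm                 # height above water
    out = np.zeros(depth_mm.shape + (3,), dtype=np.uint8)
    out[height <= 0] = (30, 60, 200)                   # underwater: blue
    out[(height > 0) & (height < 40)] = (240, 220, 130)   # beach: sand
    out[(height >= 40) & (height < 120)] = (60, 160, 60)  # lowland: green
    out[height >= 120] = (140, 120, 100)               # hills: brown
    return out

# Example: fake 4x4 depth frame; a real setup would read a Kinect depth map.
frame = np.array([[950, 930, 880, 860],
                  [940, 900, 870, 850],
                  [930, 890, 860, 840],
                  [920, 880, 850, 830]], dtype=float)
print(elevation_tint(frame)[..., 0])  # red channel of the tinted frame
```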
“Of course machines can’t think as people do. A machine is different from a person. Hence, they think differently. The interesting question is, just because something thinks differently from you, does that mean it’s not thinking?”
- The Imitation Game
Web demo by Stefan Hedman adds elastic physics to a classic Super Mario level; it is currently in pre-alpha. You need a keyboard (play with the arrow keys). To start, move towards the right side of the screen:
[video from Robert McGregor]
Play around with it yourself here
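For the curious, "elastic physics" in a 2D game usually boils down to mass-spring soft bodies. A minimal, hypothetical sketch of one integration step for a spring chain (all constants invented; this is not the demo's actual code):

```python
import numpy as np

def spring_step(pos, vel, rest_len=1.0, k=200.0, damping=0.5,
                gravity=(0.0, -9.8), dt=1 / 60):
    """One semi-implicit Euler step for a chain of point masses joined by
    springs -- the usual building block of 'elastic' game physics."""
    force = np.tile(np.asarray(gravity), (len(pos), 1)).astype(float)
    for i in range(len(pos) - 1):
        d = pos[i + 1] - pos[i]
        length = np.linalg.norm(d)
        if length > 0:
            f = k * (length - rest_len) * d / length   # Hooke's law
            force[i] += f
            force[i + 1] -= f
    vel = (vel + force * dt) * (1.0 - damping * dt)    # integrate + damp
    vel[0] = 0.0                                       # pin the first mass
    return pos + vel * dt, vel

# Drop a 5-point elastic rope pinned at the origin and let it swing.
pos = np.array([[i, 0.0] for i in range(5)])
vel = np.zeros_like(pos)
for _ in range(120):
    pos, vel = spring_step(pos, vel)
print(pos.round(2))
```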
When ur in public and have to pretend not to be anxious
Ever wished you could have a few more arms to get stuff done? Researchers at the University of Tokyo have developed MetaLimbs: strap-on robotic arms controlled by the lower body.
Graphics research from Stanford University and collaborators is the latest development in facial expression transfer and visual puppetry, offering photorealistic and editable results:
We present a novel approach that enables photo-realistic re-animation of portrait videos using only an input video. In contrast to existing approaches that are restricted to manipulations of facial expressions only, we are the first to transfer the full 3D head position, head rotation, face expression, eye gaze, and eye blinking from a source actor to a portrait video of a target actor. The core of our approach is a generative neural network with a novel space-time architecture. The network takes as input synthetic renderings of a parametric face model, based on which it predicts photo-realistic video frames for a given target actor. The realism in this rendering-to-video transfer is achieved by careful adversarial training, and as a result, we can create modified target videos that mimic the behavior of the synthetically-created input. In order to enable source-to-target video re-animation, we render a synthetic target video with the reconstructed head animation parameters from a source video, and feed it into the trained network – thus taking full control of the target. With the ability to freely recombine source and target parameters, we are able to demonstrate a large variety of video rewrite applications without explicitly modeling hair, body or background. For instance, we can reenact the full head using interactive user-controlled editing, and realize high-fidelity visual dubbing. To demonstrate the high quality of our output, we conduct an extensive series of experiments and evaluations, where for instance a user study shows that our video edits are hard to detect.
More Here
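In essence, the rendering-to-video transfer described above is a conditional image-to-image GAN: a generator maps a synthetic face-model rendering to a photorealistic frame, while a discriminator judges (rendering, frame) pairs. A heavily simplified, hypothetical PyTorch sketch of that training loop (tiny stand-in networks, not the paper's space-time architecture):

```python
import torch
import torch.nn as nn

# Toy stand-ins: a single-conv generator maps a synthetic rendering to a
# frame, and a patch discriminator judges (rendering, frame) pairs.
G = nn.Sequential(nn.Conv2d(3, 32, 3, padding=1), nn.ReLU(),
                  nn.Conv2d(32, 3, 3, padding=1), nn.Tanh())
D = nn.Sequential(nn.Conv2d(6, 32, 3, stride=2, padding=1), nn.LeakyReLU(0.2),
                  nn.Conv2d(32, 1, 3, stride=2, padding=1))

opt_g = torch.optim.Adam(G.parameters(), lr=2e-4)
opt_d = torch.optim.Adam(D.parameters(), lr=2e-4)
bce = nn.BCEWithLogitsLoss()

def train_step(rendering, real_frame):
    """One adversarial rendering-to-video training step (conditional GAN)."""
    fake = G(rendering)

    # Discriminator: real pair vs. generated pair.
    d_real = D(torch.cat([rendering, real_frame], dim=1))
    d_fake = D(torch.cat([rendering, fake.detach()], dim=1))
    loss_d = bce(d_real, torch.ones_like(d_real)) + \
             bce(d_fake, torch.zeros_like(d_fake))
    opt_d.zero_grad(); loss_d.backward(); opt_d.step()

    # Generator: fool D and stay close to the ground-truth frame.
    d_fake = D(torch.cat([rendering, fake], dim=1))
    loss_g = bce(d_fake, torch.ones_like(d_fake)) + \
             10.0 * nn.functional.l1_loss(fake, real_frame)
    opt_g.zero_grad(); loss_g.backward(); opt_g.step()
    return loss_d.item(), loss_g.item()

# Dummy batch: 4 synthetic renderings and matching real frames, 64x64.
r = torch.rand(4, 3, 64, 64) * 2 - 1
f = torch.rand(4, 3, 64, 64) * 2 - 1
print(train_step(r, f))
```

The actual system conditions on sequences of renderings (hence "space-time") and uses far deeper networks, but the pairing of a reconstruction loss with an adversarial loss is the mechanism that makes the synthetic-to-real transfer look photorealistic.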
THOPTER_02
Your order is processing 💿 for Super Deluxe