Had an interesting chat with an AI today
This is actually the 2nd machine learning program!
SP. Augmented Reality. Psycho-Pass (2012)
Meet the updated version of SpotMini, the robot dog by Boston Dynamics. Anyone in need of a new pet? via @bostondynamics (at Boston, Massachusetts)
Polish priest blessing a newly opened “Bitcoin embassy.” Warsaw, 2014.
Researchers from the Carnegie Mellon Textiles Lab have put forward a framework to turn a 3D model file into a physical knitted object:
We present the first computational approach that can transform 3D meshes, created by traditional modeling programs, directly into instructions for a computer-controlled knitting machine. Knitting machines are able to robustly and repeatably form knitted 3D surfaces from yarn, but have many constraints on what they can fabricate. Given user-defined starting and ending points on an input mesh, our system incrementally builds a helix-free, quad-dominant mesh with uniform edge lengths, runs a tracing procedure over this mesh to generate a knitting path, and schedules the knitting instructions for this path in a way that is compatible with machine constraints. We demonstrate our approach on a wide range of 3D meshes.
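The remeshing, tracing and scheduling pipeline is the paper's contribution; purely as a flavor of what the final scheduling stage produces, here is a toy Swift sketch. It is entirely hypothetical (the types and operation names are mine, not the CMU system's) and only emits row-by-row instructions for a plain flat patch, reversing carriage direction each course the way a traced knitting path snakes back and forth. Shaping, short rows and real machine constraints are omitted.

```swift
// Toy sketch: instruction scheduling for a flat rectangular patch.
// Not the CMU system; KnitOp and scheduleFlatPatch are hypothetical names.
enum KnitOp: CustomStringConvertible {
    case tuck(needle: Int)   // cast-on loop on a needle
    case knit(needle: Int)   // form a stitch on a needle

    var description: String {
        switch self {
        case .tuck(let n): return "tuck needle \(n)"
        case .knit(let n): return "knit needle \(n)"
        }
    }
}

/// Generate instructions for a patch `width` stitches wide and `rows` courses tall.
func scheduleFlatPatch(width: Int, rows: Int) -> [KnitOp] {
    var ops: [KnitOp] = []
    // Cast on by tucking each needle, left to right.
    for n in 0..<width { ops.append(.tuck(needle: n)) }
    // Knit each course, reversing direction every row so the yarn
    // carrier traces a back-and-forth (boustrophedon) path.
    for row in 0..<rows {
        let needles = row.isMultiple(of: 2) ? Array((0..<width).reversed()) : Array(0..<width)
        for n in needles { ops.append(.knit(needle: n)) }
    }
    return ops
}

let program = scheduleFlatPatch(width: 4, rows: 2)
for op in program { print(op) }
```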
More Here
Apple has just published an example for developers showing how to use the front-facing camera on the iPhone X for AR apps:
This sample app presents a simple interface allowing you to choose between four augmented reality (AR) visualizations on devices with a TrueDepth front-facing camera (see iOS Device Compatibility Reference).
- The camera view alone, without any AR content.
- The face mesh provided by ARKit, with automatic estimation of the real-world directional lighting environment.
- Virtual 3D content that appears to attach to (and be obscured by parts of) the user’s real face.
- A simple robot character whose facial expression is animated to match that of the user.
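The core ARKit calls behind a sample like this are short. Below is a minimal sketch, not Apple's sample code, assuming a storyboard-wired ARSCNView outlet named sceneView: it starts TrueDepth face tracking and overlays the ARKit face mesh on the camera view.

```swift
import UIKit
import SceneKit
import ARKit

// Minimal face-mesh visualization; names like FaceMeshViewController are mine.
class FaceMeshViewController: UIViewController, ARSCNViewDelegate {
    @IBOutlet var sceneView: ARSCNView!

    override func viewDidLoad() {
        super.viewDidLoad()
        sceneView.delegate = self
        // Light estimation supplies the directional lighting mentioned above.
        sceneView.automaticallyUpdatesLighting = true
    }

    override func viewWillAppear(_ animated: Bool) {
        super.viewWillAppear(animated)
        // Face tracking requires a TrueDepth camera (iPhone X and later).
        guard ARFaceTrackingConfiguration.isSupported else { return }
        let configuration = ARFaceTrackingConfiguration()
        configuration.isLightEstimationEnabled = true
        sceneView.session.run(configuration, options: [.resetTracking, .removeExistingAnchors])
    }

    // Provide a wireframe node showing the face mesh for each detected face anchor.
    func renderer(_ renderer: SCNSceneRenderer, nodeFor anchor: ARAnchor) -> SCNNode? {
        guard anchor is ARFaceAnchor,
              let device = sceneView.device,
              let faceGeometry = ARSCNFaceGeometry(device: device) else { return nil }
        faceGeometry.firstMaterial?.fillMode = .lines
        return SCNNode(geometry: faceGeometry)
    }

    // Keep the mesh in sync with the user's expression every frame.
    func renderer(_ renderer: SCNSceneRenderer, didUpdate node: SCNNode, for anchor: ARAnchor) {
        guard let faceAnchor = anchor as? ARFaceAnchor,
              let faceGeometry = node.geometry as? ARSCNFaceGeometry else { return }
        faceGeometry.update(from: faceAnchor.geometry)
    }
}
```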
Link
An intro video can be found here
😂
3D Printing A Fabulous Lion [x]
SP. 114 - Ghost in the Shell (2017)
Repairing the robotic hand.
An installation from Shawn Hunt and Microsoft Vancouver combines 3D printing, robotics, HoloLens mixed reality and indigenous symbolism:
The Raven, the ultimate trickster, has become a cyborg. In this Creative Collab, Shawn Hunt moves away from engaging with the handmade, exploring authenticity and our expectations of what it means to be indigenous through the removal of the hand-carved surface. The work, Transformation Mask, features Microsoft HoloLens, creating an experiential sculpture piece that engages with mixed reality.
In this work, the mask appropriates the traditional aspects of metamorphosis with the transformation from bird mask to human, yet in this adaptation the human mask has been altered, upgraded, and merged with the machine. Incorporating aspects of technology, sound and space, each part of the work reflects Hunt’s interest in how we understand and identify with the term indigenous.
This work presents a new trajectory for engagement and exploration of First Nations practice; one that points towards technology and innovation as aspects that expand traditional practices and open new avenues for interpretation.
More Here
Graphics research from Stanford University and collaborators is the latest development in facial expression transfer and visual puppetry, offering photorealistic and editable results:
We present a novel approach that enables photo-realistic re-animation of portrait videos using only an input video. In contrast to existing approaches that are restricted to manipulations of facial expressions only, we are the first to transfer the full 3D head position, head rotation, face expression, eye gaze, and eye blinking from a source actor to a portrait video of a target actor. The core of our approach is a generative neural network with a novel space-time architecture. The network takes as input synthetic renderings of a parametric face model, based on which it predicts photo-realistic video frames for a given target actor. The realism in this rendering-to-video transfer is achieved by careful adversarial training, and as a result, we can create modified target videos that mimic the behavior of the synthetically-created input. In order to enable source-to-target video re-animation, we render a synthetic target video with the reconstructed head animation parameters from a source video, and feed it into the trained network – thus taking full control of the target. With the ability to freely recombine source and target parameters, we are able to demonstrate a large variety of video rewrite applications without explicitly modeling hair, body or background. For instance, we can reenact the full head using interactive user-controlled editing, and realize high-fidelity visual dubbing. To demonstrate the high quality of our output, we conduct an extensive series of experiments and evaluations, where for instance a user study shows that our video edits are hard to detect.
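The abstract does not spell out the training objective, but "careful adversarial training" on synthetic renderings reads as a conditional GAN setup: a generator G maps the synthetic rendering x of the reconstructed face model to a photo-real frame, while a discriminator D judges rendering/frame pairs. A standard conditional adversarial objective of that general shape (the paper's actual space-time architecture and loss terms go beyond this basic form) is

$$\min_{G}\,\max_{D}\;\mathbb{E}_{(x,y)}\big[\log D(x,y)\big] \;+\; \mathbb{E}_{x}\big[\log\big(1 - D(x, G(x))\big)\big]$$

where y is the real target-actor frame paired with rendering x.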
More Here