Game developed by Glen Chiacchieri in which players lose their life bar when an opponent's laser pointer hits their feet, built as a proof-of-concept for the computing system 'Hypercard in the World':
In the video above two people are playing Laser Socks, a game I invented in an afternoon using a research programming system, common household items, and a couple of lines of code.
Players try to point a laser pointer at their opponent’s socks while dodging their opponent’s laser. Whenever they score a hit, the health meter closest to their opponent’s play area fills up with blue light. Whoever gets their opponent’s meter to fill up first wins.
In August 2015, my research group (The Communications Design Group or CDG) had a game jam — an event where participants create games together over the course of a few days. The theme was to make hybrid physical/digital games using a prototype research system Bret Victor and Robert Ochshorn had made called Hypercard in the World. This system was like an operating system for an entire room — it connected cameras, projectors, computers, databases, and laser pointers throughout the lab to let people write programs that would magically add projected graphics and interactivity to physical objects. The point of the jam was to see what playful things you could make with this kind of system. We ended up making more than a dozen new and diverse games.
I made Laser Socks, a game about jumping around and shooting a laser pointer at an opponent’s feet. It was fun, ridiculous, and simple to make. In some ways, Laser Socks became one of the highlight demonstrations of what could be done if there was a medium of expression that integrated dynamic computational elements into the physical world.
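The core mechanic described above (a camera spots a laser dot on a player's socks, and the opponent's health meter fills) can be sketched in a few lines. This is a rough illustration, not Chiacchieri's actual code: the brightness threshold, region format, and `HealthMeter` class are all my own assumptions about how such a detector might work.

```python
import numpy as np

def laser_hit(frame, region, threshold=230):
    """Return True if a bright laser dot appears inside `region` of an RGB frame.
    `region` is (y0, y1, x0, x1), covering one player's sock/play area."""
    y0, y1, x0, x1 = region
    patch = frame[y0:y1, x0:x1]
    # A red laser pointer saturates the red channel well above ambient light.
    return bool((patch[..., 0] > threshold).any())

class HealthMeter:
    """Fills toward `full`; whoever fills their opponent's meter first wins."""
    def __init__(self, full=100):
        self.value, self.full = 0, full

    def register_hit(self, amount=5):
        self.value = min(self.value + amount, self.full)

    @property
    def filled(self):
        return self.value >= self.full
```

In the real installation the "meter" is projected light in the room rather than a variable, but the game loop reduces to exactly this: check each player's region every frame, and bump the meter on a hit.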
More Here
Collaboration between WHITEvoid, Kinetic Lights and director Zhang Yimou incorporates a mechanical array of arrangeable lamps to provide context to a dance performance:
Chinese director Zhang Yimou, best known for his films "Raise the Red Lantern", "Hero" and "The Great Wall", as well as for directing the opening and closing ceremonies of the Beijing Olympics, returns to the theater stage with his concept performance "2047 APOLOGUE".
Zhang Yimou has unveiled his latest work at the National Center for the Performing Arts in Beijing. Based on the Peking Opera "Sanchakou," "2047 Apologue" breaks the form of traditional stage plays, combining Chinese folk art with the latest technology. The show aims to mirror reality, commenting on how science and technology are a huge part of life in the 21st century. It consists of 8 parts, each combining a traditional Chinese craft, music or dance style with modern high tech such as lasers, robots, drones and kinetics.
WHITEvoid was commissioned to create, program and direct the kinetic display for the last part of the show, called "Weaving Machine". The 9-minute performance features 640 motorized LED spheres, an ancient Chinese weaving machine and a modern dancer. German motor winch producer KINETIC LIGHTS provided the vertical hoist systems for the LED spheres and the control software. Russian RADUGADESIGN animated a complementary video backdrop, and CPG Concepts from Hong Kong provided the dance choreography for British dancer Rose Alice.
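A kinetic display like this is typically driven by streaming a target height to each winch on every animation frame. As a rough sketch only (the 640-sphere count is from the article, but the grid layout, travel range, and travelling-sine-wave pattern are my own illustrative assumptions, not KINETIC LIGHTS' control software):

```python
import math

ROWS, COLS = 20, 32          # 640 spheres; this grid arrangement is an assumption
MIN_H, MAX_H = 0.5, 8.0      # hoist travel range in metres (illustrative values)

def sphere_heights(t, wavelength=8.0, speed=2.0):
    """Target height for every sphere at time t: a sine wave travelling
    across the columns of the grid. Returns a ROWS x COLS list of floats."""
    heights = []
    for r in range(ROWS):
        row = []
        for c in range(COLS):
            phase = 2 * math.pi * (c / wavelength) - speed * t
            # Map sin's [-1, 1] output into the winches' travel range.
            h = MIN_H + (MAX_H - MIN_H) * (math.sin(phase) + 1) / 2
            row.append(h)
        heights.append(row)
    return heights
```

In a real rig each computed height would be sent to its winch over a protocol such as DMX or Art-Net; here the function just returns the frame of target positions.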
Link
Second part of Nat & Friends' look at virtual reality tech, this time focused on how it can be a creative platform, including Tilt Brush, 360 video and Blocks:
In this video (part 2 of a two-part VR series) I explore VR creativity tools and how artists and creators are using them. I do a Tilt Brush chicken dance, play with brains, and help YouTuber Vanessa Hill make a video about how your mind reacts to VR.
More Here
Part One can be found here
K-2SO will be there for you in augmented reality. Visit www.starwars.com/k2andme to find out how.
HOVER BONES
Plus check out Glitch Black’s music on Bandcamp!
SP. 114 - Ghost in the Shell (2017)
Repairing the robotic hand.
Project from Zach Levine modifies a Furby toy into an Amazon-powered home assistant with Alexa voice recognition:
I’m Sorry …
I thought I’d make Furby a little less annoying and give him an upgrade with a real brain. Step aside, Tin Man.
By combining a Raspberry Pi Zero W (a tiny computer), Amazon’s (mostly) open-source Alexa Voice Service software, and a few other electrical components, I converted my normal Furby into an Amazon Echo.
I give you: Furlexa.
It is my hope that you can either read this guide for fun or use it to build your own Furby Echo – I tried to write it in a style that any crowd would enjoy, and I hope I accomplished that goal.
More details on how to build your own can be found here
When Felix “PewDiePie” Kjellberg, YouTube’s most lucrative, popular superstar, uploaded a video featuring a banner with the words “Death to all Jews,” along with a man dressed as Jesus saying, “Hitler did absolutely nothing wrong,” he insisted they were jokes made in bad taste.
After losing his partnership with Disney, Kjellberg apologized, saying he was just poking fun at the “modern world.”
But attempts to distance himself from his message didn’t deter the so-called “alt-right” from accepting him as one of their own, nor did Kjellberg’s insistence that he wanted nothing to do with them.
Kjellberg may not support them, but in the few short months since his anti-Semitism scandal, far-right celebrities have become Kjellberg’s favorite new bedfellows. Read more (7/26/17)
follow @the-future-now
Machine Learning research from University of Nottingham School of Computer Science can generate a 3D model of a human face from an image using neural networks:
3D face reconstruction is a fundamental Computer Vision problem of extraordinary difficulty. Current systems often assume the availability of multiple facial images (sometimes from the same subject) as input, and must address a number of methodological challenges such as establishing dense correspondences across large facial poses, expressions, and non-uniform illumination. In general these methods require complex and inefficient pipelines for model building and fitting. In this work, we propose to address many of these limitations by training a Convolutional Neural Network (CNN) on an appropriate dataset consisting of 2D images and 3D facial models or scans. Our CNN works with just a single 2D facial image, does not require accurate alignment nor establishes dense correspondence between images, works for arbitrary facial poses and expressions, and can be used to reconstruct the whole 3D facial geometry (including the non-visible parts of the face) bypassing the construction (during training) and fitting (during testing) of a 3D Morphable Model. We achieve this via a simple CNN architecture that performs direct regression of a volumetric representation of the 3D facial geometry from a single 2D image. We also demonstrate how the related task of facial landmark localization can be incorporated into the proposed framework and help improve reconstruction quality, especially for the cases of large poses and facial expressions.
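The key idea in the abstract is regressing a volumetric representation (a voxel occupancy grid) directly from one image, rather than fitting a morphable model. As a rough illustration of what can be done with such an output (this is not the authors' code; the function, axis convention and threshold are my own), a predicted occupancy volume can be reduced to a frontal depth map by finding the first occupied voxel along each viewing ray:

```python
import numpy as np

def volume_to_depth(volume, threshold=0.5):
    """Given a predicted occupancy volume of shape (D, H, W) with values
    in [0, 1], recover a frontal depth map of shape (H, W): for each pixel,
    the index of the first occupied voxel along the depth axis
    (np.inf where the ray hits no surface)."""
    occupied = volume > threshold
    hit = occupied.any(axis=0)                       # does this ray hit anything?
    depth = occupied.argmax(axis=0).astype(float)    # first True index along depth
    depth[~hit] = np.inf                             # background pixels
    return depth
```

Because the network predicts the full volume, the same grid also contains the non-visible rear geometry the abstract mentions; marching cubes over the thresholded volume would recover the complete surface.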
There is an online demo here that lets you upload an image to convert, and even save the result as a 3D model
Link