give em the ol’ razzle dazzle
Game developed by Glen Chiacchieri where players drain their opponent's health bar by hitting the opponent's feet with a laser pointer; it is a proof-of-concept implementation of the computing concept 'Hypercard in the World':
In the video above, two people are playing Laser Socks, a game I invented in an afternoon using a research programming system, common household items, and a couple lines of code.
Players try to point a laser pointer at their opponent’s socks while dodging their opponent’s laser. Whenever they score a hit, the health meter closest to their opponent’s play area fills up with blue light. Whoever gets their opponent’s meter to fill up first wins.
In August 2015, my research group (The Communications Design Group or CDG) had a game jam — an event where participants create games together over the course of a few days. The theme was to make hybrid physical/digital games using a prototype research system Bret Victor and Robert Ochshorn had made called Hypercard in the World. This system was like an operating system for an entire room — it connected cameras, projectors, computers, databases, and laser pointers throughout the lab to let people write programs that would magically add projected graphics and interactivity to physical objects. The point of the jam was to see what playful things you could make with this kind of system. We ended up making more than a dozen new and diverse games.
I made Laser Socks, a game about jumping around and shooting a laser pointer at an opponent's feet. It was fun, ridiculous, and simple to make. In some ways, Laser Socks became one of the highlight demonstrations of what could be done with a medium of expression that integrated dynamic computational elements into the physical world.
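The scoring rule described above — each laser hit on a player's socks fills the meter beside that player's area until someone's meter is full — can be sketched in a few lines. This is a toy illustration only: the zone rectangles, meter size, and the `dot_events` feed are all hypothetical stand-ins for the camera-based laser detection that Hypercard in the World actually provided.

```python
# Minimal sketch of the Laser Socks scoring loop (hypothetical names/values;
# the real game detected laser dots with the room's cameras and projected
# the health meters with its projectors).

WINNING_SCORE = 10  # hits needed to fill a health meter (assumed value)

# Each play area is an axis-aligned rectangle: (x_min, y_min, x_max, y_max).
ZONES = {"p1": (0, 0, 100, 100), "p2": (200, 0, 300, 100)}

def zone_hit(point, zone):
    """True if a detected laser dot falls inside a player's foot zone."""
    x, y = point
    x0, y0, x1, y1 = zone
    return x0 <= x <= x1 and y0 <= y <= y1

def play(dot_events):
    """Consume (x, y) laser-dot detections; return the winner, if any.

    A dot landing in a player's zone fills the meter next to that player's
    play area; when a meter is full, the other player wins.
    """
    meters = {"p1": 0, "p2": 0}
    for dot in dot_events:
        for player, zone in ZONES.items():
            if zone_hit(dot, zone):
                meters[player] += 1  # hit player's meter fills up
                if meters[player] >= WINNING_SCORE:
                    return "p2" if player == "p1" else "p1"
    return None
```

For example, `play([(50, 50)] * 10)` lands ten hits in p1's zone and returns `"p2"` as the winner.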
More Here
Household robot calculates optimal move to win using artificial intelligence and augmented vision capabilities but does not tell anyone.
Bicentennial Man (1999)
In-development app from URC Ventures turns an ARKit-enabled iPhone into a 3D scanner:
StructurePro combines the rich sensor data available from Apple's ARKit with the 3D reconstruction capabilities of the industry-leading mobile phone 3D reconstruction pipeline from URC Ventures. StructurePro enables software companies to build applications that can be used by construction workers, building inspectors, or insurance claims adjusters to successfully model buildings from iPhone imagery.
… By integrating the advanced sensor data from ARKit, the URC Ventures image processing pipeline is now able to successfully handle the extreme rotations introduced by average end users, textureless surfaces such as large solid color walls, and repetitive structures such as ceiling tiles.
More Here
Augmented Reality app from Nexus Studios offers a geolocation wayfinding service with a virtual guide in the form of a half-naked gentleman:
HotStepper is your first Augmented Reality sidekick to any destination on Earth. HotStepper features a confident dude who, when he’s not dancing, will walk you to any location you need to go. All you need to do is go outside, pick a destination on the map and then just follow him as he does his thing.
More Here
Ho Chi Minh City, Vietnam. #Bitcoin via @kyletorpey
Google Translate writes weird poetry if you repeat random characters.
(Above, my own experiments. Inspired by https://twitter.com/smutclyde )
Latest AR exhibition from MoMAR (who ran a guerrilla show earlier this year) returns to the Pollock Room at MoMA New York, featuring works by David Kraftsow, responsible for the YouTube Artifact bot that regularly generates animated images from distorted videos:
Welcome to The Age of the Algorithm. A world in which automated processes are no longer simply tools at our disposal, but the single greatest omnipresent force currently shaping our world. For the most part, they remain unseen. Going about their business, mimicking human behavior and making decisions based on statistical analysis of what they ‘think’ is right. If the role of art in society is to incite reflection and ask questions about the state of our world, can algorithms be a part of determining and defining people’s artistic and cultural values? MoMAR presents a series of eight pieces created by David Kraftsow’s YouTube Artifact Bot.
More Here
DCGI and Adobe Research have put up an online interactive demo of their stylized facial animation paper.
Just drag and drop an image with a face into it, select one of the styles on the right, hit ‘Submit’ and see what happens …
Try it out for yourself here
This story has already been doing the rounds but is still very interesting: machine learning research from Georgia Tech manages to clone a game's design from a video recording.
The top GIF is the reconstructed clone; the bottom GIF is from the video recording:
Georgia Institute of Technology researchers have developed a new approach using an artificial intelligence to learn a complete game engine, the basic software of a game that governs everything from character movement to rendering graphics.
Their AI system watches less than two minutes of gameplay video and then builds its own model of how the game operates by studying the frames and making predictions of future events, such as what path a character will choose or how enemies might react.
To get their AI agent to create an accurate predictive model that could account for all the physics of a 2D platform-style game, the team trained the AI on a single “speedrunner” video, where a player heads straight for the goal. This made “the training problem for the AI as difficult as possible.”
Their current work uses Super Mario Bros. and they’ve started replicating the experiments with Mega Man and Sonic the Hedgehog as well. The same team first used AI and Mario Bros. gameplay video to create unique game level designs.
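The core idea reported above — watch frames of gameplay, then build a model that predicts future frames — can be illustrated with a deliberately tiny sketch. This is not the Georgia Tech team's method (which learns a full game engine from sprite-level observations); it only shows the flavor of inferring a rule from observed frame-to-frame changes and using it to predict unseen frames. All names and values here are made up for illustration.

```python
# Toy sketch of "engine learning" as next-frame prediction: infer a sprite's
# per-frame velocity rule from observed positions, then roll the learned rule
# forward to predict future frames. (Illustrative only, not the paper's method.)

def learn_velocity(positions):
    """Infer a constant (dx, dy) rule from consecutive observed positions.

    Returns the rule if one consistent delta explains every observed
    transition, else None (the observations would need a richer rule set).
    """
    deltas = {(x1 - x0, y1 - y0)
              for (x0, y0), (x1, y1) in zip(positions, positions[1:])}
    return deltas.pop() if len(deltas) == 1 else None

def predict(position, rule, steps):
    """Apply the learned rule repeatedly to predict the next `steps` frames."""
    x, y = position
    dx, dy = rule
    return [(x + dx * (i + 1), y + dy * (i + 1)) for i in range(steps)]
```

For instance, observing a sprite at x = 0, 2, 4 over three frames yields the rule `(2, 0)`, which then predicts positions `(6, 0)` and `(8, 0)` for the next two frames.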
More Here