Artificial intelligence software calculates the best pose for selling the product and demands it from the model.
Looker (1981)
This story is already doing the rounds but is still very interesting - Machine Learning research from Georgia Tech manages to clone a game engine from a video recording.
The top GIF is the reconstructed clone; the bottom GIF is from the original video recording:
Georgia Institute of Technology researchers have developed a new approach using an artificial intelligence to learn a complete game engine, the basic software of a game that governs everything from character movement to rendering graphics.
Their AI system watches less than two minutes of gameplay video and then builds its own model of how the game operates by studying the frames and making predictions of future events, such as what path a character will choose or how enemies might react.
To get their AI agent to create an accurate predictive model that could account for all the physics of a 2D platform-style game, the team trained the AI on a single “speedrunner” video, where a player heads straight for the goal. This made “the training problem for the AI as difficult as possible.”
Their current work uses Super Mario Bros. and they’ve started replicating the experiments with Mega Man and Sonic the Hedgehog as well. The same team first used AI and Mario Bros. gameplay video to create unique game level designs.
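For intuition, here's a minimal, generic Python sketch of the "study the frames, predict what comes next" idea - not the researchers' actual system, just an illustration of fitting a forward model to consecutive gameplay frames and rolling it forward. The frames below are made-up placeholder data:

```python
# A minimal, generic sketch of the "watch frames, predict the next one" idea.
# This is NOT the Georgia Tech system; it only illustrates learning a forward
# model from consecutive gameplay frames. The frame array is placeholder data.
import numpy as np

rng = np.random.default_rng(0)

# Pretend these are two minutes of gameplay, downsampled to tiny 16x16 grayscale frames.
frames = rng.random((3600, 16 * 16))          # (num_frames, pixels) -- placeholder data

X = frames[:-1]                               # frame at time t
Y = frames[1:]                                # frame at time t+1

# Fit a linear forward model W so that X @ W ~= Y (ridge-regularised least squares).
lam = 1e-2
W = np.linalg.solve(X.T @ X + lam * np.eye(X.shape[1]), X.T @ Y)

def predict_next(frame: np.ndarray) -> np.ndarray:
    """Predict the next frame from the current one with the learned linear model."""
    return frame @ W

# "Clone" the dynamics by rolling the model forward from the last observed frame.
state = frames[-1]
rollout = []
for _ in range(60):                           # imagine one second at 60 fps
    state = predict_next(state)
    rollout.append(state)

print("Rollout shape:", np.array(rollout).shape)   # (60, 256)
```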
More Here
Another smart AR experiment from Zach Lieberman proving Augmented Reality is an interesting creative platform: this one visualizes audio in space as it is recorded, and plays it back as you follow the path both forwards and backwards:
Quick test recording audio in space and playing back – (video has audio !) #openframeworks
Link
We worked on adding even more to the guns game! Here you can see gun stats displayed in the UI, gun animations, and, below, enemies using some basic state-machine AI to hunt down and track the player!
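For anyone curious what a "basic state-machine AI" can look like, here's a rough Python sketch of an idle / chase / attack enemy - not this game's actual code; every name and threshold below is invented for illustration:

```python
# A rough sketch of a basic enemy state machine (idle -> chase -> attack).
# Not the game's actual code; all names and tuning values are invented.
from dataclasses import dataclass
import math

@dataclass
class Entity:
    x: float
    y: float

class EnemyAI:
    SIGHT_RANGE = 10.0   # hypothetical tuning values
    ATTACK_RANGE = 1.5

    def __init__(self):
        self.state = "idle"

    def update(self, enemy: Entity, player: Entity, dt: float) -> None:
        dist = math.hypot(player.x - enemy.x, player.y - enemy.y)

        # State transitions based on distance to the player.
        if dist <= self.ATTACK_RANGE:
            self.state = "attack"
        elif dist <= self.SIGHT_RANGE:
            self.state = "chase"
        else:
            self.state = "idle"

        # State behaviours.
        if self.state == "chase":
            speed = 2.0
            enemy.x += (player.x - enemy.x) / dist * speed * dt
            enemy.y += (player.y - enemy.y) / dist * speed * dt
        elif self.state == "attack":
            print("Enemy attacks the player!")
        # "idle": do nothing (a patrol or wander behaviour could go here).

# Tiny usage example: the enemy walks toward the player over a few ticks.
enemy, player = Entity(0.0, 0.0), Entity(5.0, 0.0)
ai = EnemyAI()
for _ in range(5):
    ai.update(enemy, player, dt=0.1)
    print(f"state={ai.state} enemy at ({enemy.x:.2f}, {enemy.y:.2f})")
```

Keeping the transitions and the per-state behaviour in one update step makes it easy to bolt on extra states (patrol, flee, reload) later.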
I cannot argue with that
Machine gun position on the German R-class Zeppelin ‘LZ 63’, 1916-17
via reddit
Webtoy by Andy Matuschak uses the neural-network-based SketchRNN model to visualize in realtime potential sketch marks whilst you are drawing particular objects:
This pen’s ink stretches backwards into the past and forwards into possible futures. The two sides make a strange loop: the future ink influences how you draw, which in turn becomes the new “past” ink influencing further future ink.
Put another way: this is a realtime implementation of SketchRNN which predicts future strokes while you draw.
Currently works best in Chrome; you can try it out for yourself here
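For a rough sense of how the "future ink" loop works: SketchRNN-style models consume a sequence of pen offsets (dx, dy, pen-lifted flag) and sample the next one, and the webtoy re-runs that prediction as you draw. The Python sketch below keeps that autoregressive loop but swaps the trained network for a trivial stand-in that just continues the pen's recent direction, so everything here is illustrative rather than the project's actual code:

```python
# A hedged sketch of the predict-future-strokes loop. The real webtoy runs a trained
# recurrent model in the browser; fake_model_next below is only a stand-in so the
# surrounding predict-while-you-draw loop is runnable without any weights.
from typing import List, Tuple

Stroke = Tuple[float, float, int]   # (dx, dy, pen_lifted 0/1)

def fake_model_next(history: List[Stroke]) -> Stroke:
    """Stand-in for the RNN: repeat a smoothed version of the last pen offsets."""
    if not history:
        return (0.0, 0.0, 0)
    recent = history[-3:]
    dx = sum(s[0] for s in recent) / len(recent)
    dy = sum(s[1] for s in recent) / len(recent)
    return (dx, dy, 0)

def predict_future(history: List[Stroke], steps: int = 20) -> List[Stroke]:
    """Autoregressively roll the model forward to sketch the 'future ink'."""
    future: List[Stroke] = []
    context = list(history)
    for _ in range(steps):
        nxt = fake_model_next(context)
        future.append(nxt)
        context.append(nxt)          # feed the prediction back in
    return future

# Usage: as the user draws, append each new offset and re-predict the future ink.
drawn: List[Stroke] = [(1.0, 0.5, 0), (1.2, 0.4, 0), (1.1, 0.6, 0)]
ghost_ink = predict_future(drawn, steps=5)
print(ghost_ink)
```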
Artificial Intelligence learns to beat Mario like crazy.
THOPTER_02