https://vimeo.com/175247441
This story is already doing the rounds but is still very interesting - Machine Learning research from Georgia Tech manages to clone a game engine from a video recording.
The top GIF is the reconstructed clone; the bottom GIF is from the original video recording:
Georgia Institute of Technology researchers have developed a new approach using artificial intelligence to learn a complete game engine, the basic software of a game that governs everything from character movement to rendering graphics.
Their AI system watches less than two minutes of gameplay video and then builds its own model of how the game operates by studying the frames and making predictions of future events, such as what path a character will choose or how enemies might react.
To get their AI agent to create an accurate predictive model that could account for all the physics of a 2D platform-style game, the team trained the AI on a single “speedrunner” video, where a player heads straight for the goal. This made “the training problem for the AI as difficult as possible.”
Their current work uses Super Mario Bros. and they’ve started replicating the experiments with Mega Man and Sonic the Hedgehog as well. The same team first used AI and Mario Bros. gameplay video to create unique game level designs.
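The core idea, stripped of all the sophistication of the actual system, is frame-to-frame prediction: watch how things move between frames, infer rules, then predict what comes next. A toy sketch of that loop (my own illustration, not the Georgia Tech method - sprite names and the simple constant-velocity rule are made up):

```python
# Toy "engine learning" sketch: observe two consecutive frames,
# infer a per-sprite velocity, and use it to predict future frames.
# Frames map sprite names to (x, y) grid positions.

def infer_velocities(frame_a, frame_b):
    """Infer each sprite's velocity from two consecutive frames."""
    return {s: (frame_b[s][0] - frame_a[s][0], frame_b[s][1] - frame_a[s][1])
            for s in frame_a if s in frame_b}

def predict(frame, velocities):
    """Apply the inferred velocities to predict the next frame."""
    return {s: (x + velocities[s][0], y + velocities[s][1])
            for s, (x, y) in frame.items() if s in velocities}

# Two observed frames: one sprite moves right, the other moves left.
f0 = {"mario": (0, 0), "goomba": (10, 0)}
f1 = {"mario": (2, 0), "goomba": (9, 0)}

v = infer_velocities(f0, f1)   # {"mario": (2, 0), "goomba": (-1, 0)}
f2 = predict(f1, v)            # {"mario": (4, 0), "goomba": (8, 0)}
```

The real system goes much further - it searches for whole rule sets (collisions, gravity, animations) that explain the observed frames - but the predict-and-compare loop is the same shape.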
More Here
HCI research from the Media Interaction Lab and Google presents a proof-of-concept interface using elastic textiles that are simple to produce:
StretchEBand are stitch-based elastic sensors, which have the benefit of being manufacturable with textile craft tools that have been used in homes for centuries. We contribute to the understanding of stitch-based stretch sensors through four experiments and one user study that investigate conductive yarns from textile and technical perspectives, and analyze the impact of different stitch types and parameters. The insights informed our design of new stretch-based interaction techniques that emphasize eyes-free or casual interactions. We demonstrate with StretchEBand how soft, continuous sensors can be rapidly fabricated with different parameters and capabilities to support interaction with a wide range of performance requirements across wearables, mobile devices, clothing, furniture, and toys.
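As a rough illustration of how a sensor like this might be read in practice - my own hedged sketch, not code from the paper; I'm assuming a resistive conductive yarn wired into a voltage divider, and `r_fixed`, `v_ref`, and the calibration endpoints are made-up parameters:

```python
# Sketch: turning a raw ADC reading from a resistive textile stretch
# sensor into a normalized 0..1 stretch value via a voltage divider.

def divider_resistance(adc_value, adc_max=1000, v_ref=3.3, r_fixed=10_000):
    """Sensor resistance, given the ADC reading across the fixed resistor."""
    v_out = adc_value / adc_max * v_ref
    return r_fixed * (v_ref - v_out) / v_out  # solve the divider equation

def normalized_stretch(resistance, r_relaxed, r_stretched):
    """Map resistance between two calibrated endpoints onto 0..1."""
    t = (resistance - r_relaxed) / (r_stretched - r_relaxed)
    return min(1.0, max(0.0, t))  # clamp outside the calibrated range
```

The two calibration endpoints (`r_relaxed`, `r_stretched`) would be measured per sensor, since hand-stitched sensors vary - which is part of what the paper's experiments characterize.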
More Here
Machine Learning research from the University of Nottingham School of Computer Science can generate a 3D model of a human face from a single image using neural networks:
3D face reconstruction is a fundamental Computer Vision problem of extraordinary difficulty. Current systems often assume the availability of multiple facial images (sometimes from the same subject) as input, and must address a number of methodological challenges such as establishing dense correspondences across large facial poses, expressions, and non-uniform illumination. In general these methods require complex and inefficient pipelines for model building and fitting. In this work, we propose to address many of these limitations by training a Convolutional Neural Network (CNN) on an appropriate dataset consisting of 2D images and 3D facial models or scans. Our CNN works with just a single 2D facial image, does not require accurate alignment nor establishes dense correspondence between images, works for arbitrary facial poses and expressions, and can be used to reconstruct the whole 3D facial geometry (including the non-visible parts of the face) bypassing the construction (during training) and fitting (during testing) of a 3D Morphable Model. We achieve this via a simple CNN architecture that performs direct regression of a volumetric representation of the 3D facial geometry from a single 2D image. We also demonstrate how the related task of facial landmark localization can be incorporated into the proposed framework and help improve reconstruction quality, especially for the cases of large poses and facial expressions.
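The "volumetric representation" is worth unpacking: the network's regression target is a 3D occupancy volume rather than mesh or morphable-model parameters. A rough sketch of what that representation looks like (the grid size and scaling here are arbitrary placeholders, not the paper's actual dimensions):

```python
import numpy as np

def voxelize(points, grid=(32, 32, 32)):
    """Encode a 3D surface as a binary voxel occupancy volume.

    points: (N, 3) array of surface coordinates, pre-scaled to [0, 1).
    Returns a uint8 volume with 1s wherever the surface passes through.
    """
    dims = np.array(grid)
    vol = np.zeros(grid, dtype=np.uint8)
    # Map each point to its voxel index, clamped to the grid bounds.
    idx = np.clip((points * dims).astype(int), 0, dims - 1)
    vol[idx[:, 0], idx[:, 1], idx[:, 2]] = 1
    return vol

# A single point at the center of the unit cube occupies one voxel.
vol = voxelize(np.array([[0.5, 0.5, 0.5]]))
```

Regressing a volume like this directly from image pixels is what lets the CNN skip building and fitting a 3D Morphable Model.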
There is an online demo which lets you upload an image to convert, and even save the result as a 3D model, here
Link
Motion capture: you never know when I may need to do one of The Rock’s Baywatch stunts; better safe than sorry.
Continuing from my previous post, a little FYI …
You can download your model and upload it to @sketchfab
The example above was created using this current Tumblr Radar image from @made
So, our physics teacher has the strange idea of motivating his students by letting each of us present a physical phenomenon we find interesting to our classmates in a five-minute presentation. And now I need something that is interesting for everyone - even people who usually don't care about physics - but also has interesting facts for someone who's into it (preferably with an easy experiment). You don't happen to have any ideas, do you?
First of all, your teacher is awesome for taking the time to do this. Off the top of my head, the best one I have is Chladni figures.
Basically, take a flat metal plate, fix it at the center, and sprinkle some fine sand particles on it.
Using a violin bow, gently excite any edge of the plate to magically witness these beautiful normal mode patterns (known as Chladni patterns/figures) forming on the plate.
Also notice that by pinching the plate at different points, the pattern obtained changes.
There is a whole lot of physics behind such a simple phenomenon, and I wouldn't dare say we understand it completely. There are lots of questions about these figures that we have no answer for!
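If you want a visual for your slides, the idealized math is simple enough to sketch. A minimal Python example using the classic square-plate approximation (the real plate equation is messier - this is just the textbook standing-wave picture):

```python
import numpy as np

# Idealized Chladni pattern for a square plate driven at the center.
# Classic approximation: the standing-wave amplitude is
#   u(x, y) = cos(n*pi*x)*cos(m*pi*y) + cos(m*pi*x)*cos(n*pi*y)
# and sand collects along the nodal lines, where u is (close to) zero.

def chladni_amplitude(n, m, size=200):
    x = np.linspace(-1.0, 1.0, size)
    X, Y = np.meshgrid(x, x)
    return (np.cos(n * np.pi * X) * np.cos(m * np.pi * Y)
            + np.cos(m * np.pi * X) * np.cos(n * np.pi * Y))

u = chladni_amplitude(3, 5)
nodal_lines = np.abs(u) < 0.02  # boolean mask: roughly where the sand ends up
```

Plotting `nodal_lines` with matplotlib reproduces the familiar figures; different (n, m) mode pairs give different patterns, which is why bowing and pinching the plate at different points changes the design.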
Hope this helps with your presentation. Have a good one!
Gif source video: Steve Mould
Timelapse of Star Trails over Sparks Lake, Oregon
Sunflowers or Gears Turning Stimboard for anon
Sources: (x) (x) (x) (x) (x) (x) (x) (x) (x)
Project from Fernando Ramallo is a drawing and animation tool for Unity with simple interfaces to create assets for games and interactive experiences, a bit like Flash but in 2.5D:
DOODLE STUDIO 95 is a FUN drawing and animation tool for Unity.
Doodle an animation without leaving the Editor and turn your drawings into sprites, UI elements, particles or textures, with a single click.
Draw inside the Unity Editor
Easy presets for backgrounds, characters and UI elements
Example scenes with 2.5D characters, foliage, speech bubbles and transitions, with reusable scripts
Draw and animate inside the Scene View (beta)
Shadow-casting shaders
Don’t think about materials or image formats, it Just Works.
Five Symmetry modes
Record mode adds frames as you draw
Record a sound with a single click! Boop!
Easy API for using animations with scripts
Convert to sprite sheets or GIFs
…and more
You can find out more here, and even try out a browser-based interactive tour here