A project from Fernando Ramallo: a drawing and animation tool for Unity with simple interfaces for creating assets for games and interactive experiences, a bit like Flash but in 2.5D:
DOODLE STUDIO 95 is a FUN drawing and animation tool for Unity.
Doodle an animation without leaving the Editor and turn your drawings into sprites, UI elements, particles or textures, with a single click.
Draw inside the Unity Editor
Easy presets for backgrounds, characters and UI elements
Example scenes with 2.5D characters, foliage, speech bubbles and transitions, with reusable scripts
Draw and animate inside the Scene View (beta)
Shadow-casting shaders
Don’t think about materials or image formats, it Just Works.
Five Symmetry modes
Record mode adds frames as you draw
Record a sound with a single click! Boop!
Easy API for using animations with scripts
Convert to sprite sheets or GIFs
…and more
You can find out more here, and even try out a browser-based interactive tour here.
Her name is Kavya Kopparapu and she’s a 16-year-old high school junior. She just might be a South Asian-American Bill Gates in the making.
Human 🧔🏽 vs. robot 🤖 | Video by Boston Dynamics (at MIT School of Engineering)
Continuing from my previous post, a little FYI …
You can download your model and upload it to @sketchfab.
The example above was created using this current Tumblr Radar image from @made.
First underwater entanglement could lead to unhackable comms: A team of Chinese researchers has, for the first time, transmitted quantum entangled particles of light through water – the first step in using lasers to send underwater messages that are impossible to intercept. http://ift.tt/2vnLups
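For context, the "entangled particles of light" here are photon pairs prepared in a Bell state (standard textbook material, not spelled out in the linked article): measuring one photon's polarization instantly fixes the other's, and any interception disturbs those correlations, which is what makes the channel tamper-evident:

```latex
% Bell state commonly used in polarization-entanglement experiments,
% with H/V the horizontal/vertical polarizations of photons A and B
\[
  \lvert \Phi^{+} \rangle
  = \frac{1}{\sqrt{2}}
    \bigl( \lvert H \rangle_A \lvert H \rangle_B
         + \lvert V \rangle_A \lvert V \rangle_B \bigr)
\]
```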
Research from the Carnegie Mellon Textiles Lab puts forward a framework for turning a 3D model file into a physical knitted object:
We present the first computational approach that can transform 3D meshes, created by traditional modeling programs, directly into instructions for a computer-controlled knitting machine. Knitting machines are able to robustly and repeatably form knitted 3D surfaces from yarn, but have many constraints on what they can fabricate. Given user-defined starting and ending points on an input mesh, our system incrementally builds a helix-free, quad-dominant mesh with uniform edge lengths, runs a tracing procedure over this mesh to generate a knitting path, and schedules the knitting instructions for this path in a way that is compatible with machine constraints. We demonstrate our approach on a wide range of 3D meshes.
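To give a feel for the target representation: knitting machines consume a linear stream of per-needle operations, built row by row with the carriage alternating direction. Here's a toy Python sketch (my own illustration for a flat rectangle, not the paper's code; real machine formats such as the lab's knitout language are far stricter) of what such an instruction stream looks like:

```python
# Toy illustration: emit pseudo-instructions for a flat width x height
# rectangle of plain knitting. Rows alternate direction, mirroring the
# carriage passes of a real machine. Not actual knitout syntax.
def rectangle_instructions(width, height):
    ops = [f"cast-on needle {n}" for n in range(width)]
    for row in range(height):
        needles = range(width) if row % 2 == 0 else reversed(range(width))
        ops += [f"knit row {row} needle {n}" for n in needles]
    ops += [f"bind-off needle {n}" for n in range(width)]
    return ops

for op in rectangle_instructions(4, 2):
    print(op)
```

The paper's contribution is doing this for arbitrary curved meshes, where deciding what the "rows" are (the remeshing and tracing steps described in the abstract) is the hard part.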
More Here
Ever wished you could have a few more arms to get stuff done? Researchers at the University of Tokyo have developed Metalimbs. They’re strap-on robotic arms controlled by the lower body.
Just a reminder for the community!
Run a Bitcoin Core 0.14.1 full node and support SegWit!
Support my electricity bill if you like: Bitcoin: 1FSZytTNZNqs69mSh5grU73DmrPVtBkz7m
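If you're already running a node, here's a quick way to check on it (a minimal sketch, assuming Python with the `requests` library and RPC credentials configured in your bitcoin.conf; the credentials below are placeholders):

```python
# Query a local Bitcoin Core 0.14.x node over JSON-RPC to check sync
# progress and the BIP9 segwit deployment status it reports.
import requests

RPC_URL = "http://127.0.0.1:8332"      # default mainnet RPC port
RPC_AUTH = ("rpcuser", "rpcpassword")  # placeholders; use your own values

def rpc(method, params=None):
    """Call a Bitcoin Core JSON-RPC method and return its result."""
    payload = {"jsonrpc": "1.0", "id": "check", "method": method,
               "params": params or []}
    resp = requests.post(RPC_URL, json=payload, auth=RPC_AUTH)
    resp.raise_for_status()
    return resp.json()["result"]

info = rpc("getblockchaininfo")
print("blocks synced:", info["blocks"])
print("verification progress: %.1f%%" % (100 * info["verificationprogress"]))
# 0.14.x reports BIP9 deployment states, including segwit, in this field:
print("segwit status:",
      info.get("bip9_softforks", {}).get("segwit", {}).get("status"))
```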
Machine learning research from the University of Nottingham School of Computer Science can generate a 3D model of a human face from a single image using neural networks:
3D face reconstruction is a fundamental Computer Vision problem of extraordinary difficulty. Current systems often assume the availability of multiple facial images (sometimes from the same subject) as input, and must address a number of methodological challenges such as establishing dense correspondences across large facial poses, expressions, and non-uniform illumination. In general these methods require complex and inefficient pipelines for model building and fitting. In this work, we propose to address many of these limitations by training a Convolutional Neural Network (CNN) on an appropriate dataset consisting of 2D images and 3D facial models or scans. Our CNN works with just a single 2D facial image, does not require accurate alignment nor establishes dense correspondence between images, works for arbitrary facial poses and expressions, and can be used to reconstruct the whole 3D facial geometry (including the non-visible parts of the face) bypassing the construction (during training) and fitting (during testing) of a 3D Morphable Model. We achieve this via a simple CNN architecture that performs direct regression of a volumetric representation of the 3D facial geometry from a single 2D image. We also demonstrate how the related task of facial landmark localization can be incorporated into the proposed framework and help improve reconstruction quality, especially for the cases of large poses and facial expressions.
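To make the "direct volumetric regression" idea concrete, here is a minimal PyTorch sketch (my own, with toy layer sizes; the paper's actual network is a much deeper stacked-hourglass model): a CNN maps a single RGB image to a stack of voxel-occupancy slices, one output channel per slice of the 3D volume:

```python
# Minimal sketch of direct volumetric regression: an encoder-decoder CNN
# that maps one RGB image to a voxel occupancy volume of the face.
import torch
import torch.nn as nn

class VolumetricRegressor(nn.Module):
    def __init__(self, depth=64):  # depth = number of voxel slices predicted
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 32, 3, stride=2, padding=1), nn.ReLU(),            # 128 -> 64
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),           # 64 -> 32
            nn.ConvTranspose2d(64, 64, 4, stride=2, padding=1), nn.ReLU(),  # 32 -> 64
            nn.ConvTranspose2d(64, depth, 4, stride=2, padding=1),          # 64 -> 128
            nn.Sigmoid(),  # per-voxel occupancy in [0, 1]
        )

    def forward(self, image):
        # image: (B, 3, 128, 128) -> volume: (B, depth, 128, 128),
        # one occupancy slice per output channel
        return self.net(image)

volume = VolumetricRegressor()(torch.randn(1, 3, 128, 128))
print(volume.shape)  # torch.Size([1, 64, 128, 128])
```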
There is an online demo here that lets you upload an image to convert, and even save the result as a 3D model.
Link