With this robotics work, Intel is announcing its presence in the market. 😊
More than 400 U.S. school districts are using augmented reality to teach students. Is AR the future of education?
Machine Learning research from University of Nottingham School of Computer Science can generate a 3D model of a human face from an image using neural networks:
3D face reconstruction is a fundamental Computer Vision problem of extraordinary difficulty. Current systems often assume the availability of multiple facial images (sometimes from the same subject) as input, and must address a number of methodological challenges such as establishing dense correspondences across large facial poses, expressions, and non-uniform illumination. In general these methods require complex and inefficient pipelines for model building and fitting. In this work, we propose to address many of these limitations by training a Convolutional Neural Network (CNN) on an appropriate dataset consisting of 2D images and 3D facial models or scans. Our CNN works with just a single 2D facial image, does not require accurate alignment nor establishes dense correspondence between images, works for arbitrary facial poses and expressions, and can be used to reconstruct the whole 3D facial geometry (including the non-visible parts of the face) bypassing the construction (during training) and fitting (during testing) of a 3D Morphable Model. We achieve this via a simple CNN architecture that performs direct regression of a volumetric representation of the 3D facial geometry from a single 2D image. We also demonstrate how the related task of facial landmark localization can be incorporated into the proposed framework and help improve reconstruction quality, especially for the cases of large poses and facial expressions.
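The core idea of "direct regression of a volumetric representation" can be sketched in miniature. This is not the authors' network (their system is a deep CNN trained on 2D images paired with 3D scans); the stand-in below uses a single random linear layer purely to show the input/output relationship: one 2D image in, one 3D occupancy volume out, with no 3D Morphable Model fitting step. All sizes and weights are assumptions for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

H = W = 32  # input image resolution (tiny, for the sketch)
D = 16      # depth slices of the output volume

image = rng.random((H, W))  # stands in for a single 2D facial image

# Hypothetical "learned" weights; a real system would use a trained CNN here.
weights = rng.standard_normal((H * W, H * W * D)) * 0.01

# Direct regression: map the image straight to per-voxel occupancy scores,
# then squash to [0, 1] probabilities and reshape into a volume.
logits = image.reshape(-1) @ weights
volume = (1 / (1 + np.exp(-logits))).reshape(H, W, D)

print(volume.shape)  # (32, 32, 16)
```

The 3D face surface would then be extracted from `volume` by thresholding the occupancy probabilities, which is what lets the method recover non-visible parts of the face without dense correspondences.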
There is an online demo that lets you upload an image to convert, and even save the result as a 3D model:
Link
https://vimeo.com/175247441
Developers @lingoded and @JesseBarksdale have been sharing on Twitter demos of Japanese-Style RPGs using iOS ARKit and Unity:
Little information is currently known about the project, but it is clear @lingoded is part of an AR gaming project called GeneReal, so this may be part of it.
SP. Gynoid (Fembot)
A.I. Artificial Intelligence (2001)
Project from Google Creative Lab is an open source physical interface for their NSynth project, which uses Machine Learning to understand the characteristics of sounds and generate new ones:
Building upon past research in this field, Magenta created NSynth (Neural Synthesizer). It’s a machine learning algorithm that uses a deep neural network to learn the characteristics of sounds, and then create a completely new sound based on these characteristics.
Rather than combining or blending the sounds, NSynth synthesizes an entirely new sound using the acoustic qualities of the original sounds—so you could get a sound that’s part flute and part sitar all at once.
Since the release of NSynth, Magenta have continued to experiment with different musical interfaces and tools to make the output of the NSynth algorithm more easily accessible and playable.
Using NSynth Super, musicians have the ability to explore more than 100,000 sounds generated with the NSynth algorithm.
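The "part flute and part sitar" idea can be illustrated with a toy pipeline. This is not Magenta's NSynth (which uses a deep WaveNet-style autoencoder): the stand-in "encoder" and "decoder" below are random linear maps, and the two input sounds are plain sine waves. The point is the structure: sounds are blended in an embedding space of acoustic characteristics and then decoded, rather than mixing the waveforms directly.

```python
import numpy as np

rng = np.random.default_rng(1)

N = 1024  # samples per sound clip
K = 16    # embedding size

# Hypothetical encoder/decoder weights; NSynth learns these with a deep net.
enc = rng.standard_normal((N, K)) * 0.05
dec = rng.standard_normal((K, N)) * 0.05

t = np.arange(N) / 16000
flute = np.sin(2 * np.pi * 440 * t)  # stand-ins for real instrument
sitar = np.sin(2 * np.pi * 196 * t)  # recordings

# Encode each sound to its embedding of "characteristics".
z_flute, z_sitar = flute @ enc, sitar @ enc

# Synthesize a new sound: interpolate in embedding space, then decode.
new_sound = (0.5 * z_flute + 0.5 * z_sitar) @ dec

print(new_sound.shape)  # (1024,)
```

In the real system the interpolation weights are what a musician sweeps across the NSynth Super's touch surface to move between the four corner instruments.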
More Here
How To Buy Bitcoins In India | A Step-By-Step Guide Find more Bitcoin mining rig reviews: http://bitcoinist.net
SP. “Free your mind.” The Matrix (1999)
Game developed by Glen Chiacchieri in which players shoot a laser pointer at their opponent’s feet, filling a health meter with each hit; it is a proof-of-concept implementation of the computing concept ‘Hypercard in the World’:
In the video above two people are playing Laser Socks, a game I invented in an afternoon using a research programming system, common household items, and a couple lines of code.
Players try to point a laser pointer at their opponent’s socks while dodging their opponent’s laser. Whenever they score a hit, the health meter closest to their opponent’s play area fills up with blue light. Whoever gets their opponent’s meter to fill up first wins.
In August 2015, my research group (The Communications Design Group or CDG) had a game jam — an event where participants create games together over the course of a few days. The theme was to make hybrid physical/digital games using a prototype research system Bret Victor and Robert Ochshorn had made called Hypercard in the World. This system was like an operating system for an entire room — it connected cameras, projectors, computers, databases, and laser pointers throughout the lab to let people write programs that would magically add projected graphics and interactivity to physical objects. The point of the jam was to see what playful things you could make with this kind of system. We ended up making more than a dozen new and diverse games.
I made Laser Socks, a game about jumping around and shooting a laser pointer at an opponent’s feet. It was fun, ridiculous, and simple to make. In some ways, Laser Socks became one of the highlight demonstrations of what could be done if there was a medium of expression that integrated dynamic computational elements into the physical world.
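The scoring rules described above fit in a few lines, much like the "couple lines of code" the author mentions. This sketch is not Glen Chiacchieri's implementation (which ran on the Hypercard in the World room system with cameras and projectors); the function name, hit-count threshold, and input format are all assumptions, standing in for the real laser-detection events.

```python
METER_FULL = 10  # hypothetical number of hits needed to fill a meter

def play(hits):
    """hits: sequence of player ids (1 or 2), each a landed hit on the
    opponent's socks. Returns the winner once a meter fills, else None."""
    meters = {1: 0, 2: 0}  # meter closest to each player's play area
    for scorer in hits:
        target = 2 if scorer == 1 else 1
        meters[target] += 1  # a hit fills the meter near the opponent
        if meters[target] >= METER_FULL:
            return scorer    # scorer filled the opponent's meter first
    return None

print(play([1] * 10))           # player 1 lands 10 straight hits -> 1
print(play([1, 2] * 9 + [2]))   # player 2 reaches 10 hits first -> 2
```

Everything else in the real game, such as detecting laser dots on socks and projecting the blue meters onto the floor, was handled by the room-scale system.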
More Here