Piano player wears an eye tracker so you can see exactly where their eyes move as they play. Amazing video.
1. Our upcoming James Webb Space Telescope will act like a powerful time machine – because it will capture light that’s been traveling across space for as long as 13.5 billion years, when the first stars and galaxies were formed out of the darkness of the early universe.
2. Webb will be able to see infrared light. This is light just outside the visible spectrum, just beyond what our human eyes can see.
3. Webb’s unprecedented sensitivity to infrared light will help astronomers to compare the faintest, earliest galaxies to today’s grand spirals and ellipticals, helping us to understand how galaxies assemble over billions of years.
Hubble’s infrared look at the Horsehead Nebula. Credit: NASA/ESA/Hubble Heritage Team
4. Webb will be able to see right through and into massive clouds of dust that are opaque to visible-light observatories like the Hubble Space Telescope. Inside those clouds are where stars and planetary systems are born.
5. In addition to seeing things inside our own solar system, Webb will tell us more about the atmospheres of planets orbiting other stars, and perhaps even find the building blocks of life elsewhere in the universe.
Credit: Northrop Grumman
6. Webb will orbit the Sun a million miles away from Earth, at a spot called the second Lagrange point. (L2 is about four times farther away than the Moon!)
7. To preserve Webb’s heat-sensitive vision, it has a ‘sunshield’ that’s the size of a tennis court; it gives the telescope the equivalent of SPF 1 million sun protection! The sunshield also maintains a temperature difference of almost 600 degrees Fahrenheit between the hot and cold sides of the spacecraft.
8. Webb’s 18-segment primary mirror is over 6 times bigger in area than Hubble’s and will be ~100x more powerful. (How big is it? 6.5 meters in diameter.)
9. Webb’s 18 primary mirror segments can each be individually adjusted to work as one massive mirror. They’re covered with a golf ball’s worth of gold, which optimizes them for reflecting infrared light (the coating is so thin that a human hair is 1,000 times thicker!).
10. Webb will be so sensitive, it could detect the heat signature of a bumblebee at the distance of the moon, and can see details the size of a US penny at the distance of about 40 km.
BONUS! Over 1,200 scientists, engineers and technicians from 14 countries (and more than 27 U.S. states) have taken part in designing and building Webb. The entire project is a joint mission between NASA and the European and Canadian Space Agencies. The telescope part of the observatory was assembled in the world’s largest cleanroom at our Goddard Space Flight Center in Maryland.
Webb is currently being tested at our Johnson Space Center in Houston, TX.
Afterwards, the telescope will travel to Northrop Grumman to be mated with the spacecraft and undergo final testing. Once complete, Webb will be packed up and transported by boat to its launch site in French Guiana, where a European Space Agency Ariane 5 rocket will carry it into space.
Learn more about the James Webb Space Telescope HERE, or follow the mission on Facebook, Twitter and Instagram.
Make sure to follow us on Tumblr for your regular dose of space: http://nasa.tumblr.com.
Silicon Valley entrepreneur and novelist Rob Reid takes on artificial intelligence — and how it might end the world — in his weird, funny techno-philosophical thriller, After On.
Critic Jason Sheehan says, “It’s like an extended philosophy seminar run by a dozen insane Cold War heads-of-station, three millennial COOs and that guy you went to college with who always had the best weed but never did his laundry.”
‘After On’ Sees The End Of The World In A Dating App
It consists of a charge of compressed air that is released to “press” the front wheel against the ground when a loss of grip is detected at the front end.
Ho Chi Minh City, Vietnam. #Bitcoin via @kyletorpey
If you could talk to your childhood inspiration, what would you say?
Listen to Brian Lehrer’s full interview with LeVar Burton here.
Graphics research from Stanford University and collaborators is the latest development in facial expression transfer and visual puppetry, offering photorealistic and editable results:
We present a novel approach that enables photo-realistic re-animation of portrait videos using only an input video. In contrast to existing approaches that are restricted to manipulations of facial expressions only, we are the first to transfer the full 3D head position, head rotation, face expression, eye gaze, and eye blinking from a source actor to a portrait video of a target actor. The core of our approach is a generative neural network with a novel space-time architecture. The network takes as input synthetic renderings of a parametric face model, based on which it predicts photo-realistic video frames for a given target actor. The realism in this rendering-to-video transfer is achieved by careful adversarial training, and as a result, we can create modified target videos that mimic the behavior of the synthetically-created input. In order to enable source-to-target video re-animation, we render a synthetic target video with the reconstructed head animation parameters from a source video, and feed it into the trained network – thus taking full control of the target. With the ability to freely recombine source and target parameters, we are able to demonstrate a large variety of video rewrite applications without explicitly modeling hair, body or background. For instance, we can reenact the full head using interactive user-controlled editing, and realize high-fidelity visual dubbing. To demonstrate the high quality of our output, we conduct an extensive series of experiments and evaluations, where for instance a user study shows that our video edits are hard to detect.
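For readers who want a concrete picture of the setup: at its core this is conditional image-to-image translation, where a synthetic rendering of the parametric face model goes in and a photo-realistic frame of the target actor comes out, trained with an adversarial loss plus a reconstruction term. The sketch below is only an illustration of that general recipe, not the authors’ network; every layer size, loss weight, and name is a placeholder.

```python
# Minimal sketch of a conditional rendering-to-video translation network,
# loosely following the idea in the abstract above (synthetic rendering in,
# photo-realistic frame out, trained adversarially). All shapes, layers,
# and loss weights are illustrative placeholders, not the paper's.
import torch
import torch.nn as nn

class Generator(nn.Module):
    """Maps a synthetic face rendering (3xHxW) to a photo-realistic frame."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 64, 4, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(64, 128, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(128, 64, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(64, 3, 4, stride=2, padding=1), nn.Tanh(),
        )

    def forward(self, rendering):
        return self.net(rendering)

class Discriminator(nn.Module):
    """Judges (rendering, frame) pairs as real or generated."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(6, 64, 4, stride=2, padding=1), nn.LeakyReLU(0.2),
            nn.Conv2d(64, 128, 4, stride=2, padding=1), nn.LeakyReLU(0.2),
            nn.Conv2d(128, 1, 4, stride=1, padding=1),  # patch-wise real/fake scores
        )

    def forward(self, rendering, frame):
        return self.net(torch.cat([rendering, frame], dim=1))

def training_step(G, D, opt_g, opt_d, rendering, real_frame):
    bce = nn.BCEWithLogitsLoss()
    l1 = nn.L1Loss()

    # Discriminator: real pairs should score 1, generated pairs 0.
    fake_frame = G(rendering).detach()
    d_real = D(rendering, real_frame)
    d_fake = D(rendering, fake_frame)
    d_loss = bce(d_real, torch.ones_like(d_real)) + bce(d_fake, torch.zeros_like(d_fake))
    opt_d.zero_grad(); d_loss.backward(); opt_d.step()

    # Generator: fool the discriminator while staying close to the real frame.
    fake_frame = G(rendering)
    g_adv = D(rendering, fake_frame)
    g_loss = bce(g_adv, torch.ones_like(g_adv)) + 10.0 * l1(fake_frame, real_frame)
    opt_g.zero_grad(); g_loss.backward(); opt_g.step()
    return d_loss.item(), g_loss.item()
```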
More Here
Project from Nick Nelson applies neural network learning to playing Mario Kart 64, with successful results:
This is NOT a human playing the game, it is in fact the program I wrote. It is a special kind of machine learning that models biological evolution to evolve “species” to find the optimal solution to the problem. In this case the problem is Mario Kart 64! This run is the result of about two days of training.
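The project uses a NEAT-style method that also evolves network topology and splits the population into species; as a rough, hedged illustration of the basic evaluate-select-mutate loop behind that kind of neuroevolution, here is a minimal sketch (the emulator hook, sizes, and rates are all invented for illustration, not the project’s code):

```python
# Minimal sketch of the evolutionary idea described above: keep a population
# of small neural-net controllers, score each one in the game, keep the best,
# and mutate them into the next generation. The fitness hook (run_episode)
# and all sizes/rates are placeholders.
import numpy as np

N_INPUTS, N_HIDDEN, N_OUTPUTS = 16, 8, 4   # e.g. track sensors in, steer/throttle out
POP_SIZE, GENERATIONS, MUTATION_STD = 50, 100, 0.1

def random_genome(rng):
    """A genome is just the flattened weights of a tiny 2-layer controller."""
    return rng.normal(0, 1, size=N_INPUTS * N_HIDDEN + N_HIDDEN * N_OUTPUTS)

def act(genome, observation):
    """Run the controller: observation -> action values."""
    w1 = genome[: N_INPUTS * N_HIDDEN].reshape(N_INPUTS, N_HIDDEN)
    w2 = genome[N_INPUTS * N_HIDDEN:].reshape(N_HIDDEN, N_OUTPUTS)
    hidden = np.tanh(observation @ w1)
    return np.tanh(hidden @ w2)

def run_episode(genome):
    """Placeholder fitness function: in the real project this would drive the
    Mario Kart 64 emulator and return e.g. distance travelled along the track."""
    rng = np.random.default_rng(abs(hash(genome.tobytes())) % (2**32))
    observation = rng.normal(0, 1, size=N_INPUTS)
    return float(np.sum(act(genome, observation)))  # stand-in score

def evolve(seed=0):
    rng = np.random.default_rng(seed)
    population = [random_genome(rng) for _ in range(POP_SIZE)]
    for _ in range(GENERATIONS):
        scored = sorted(population, key=run_episode, reverse=True)
        elite = scored[: POP_SIZE // 5]              # keep the top 20%
        population = list(elite)
        while len(population) < POP_SIZE:            # refill by mutating the elite
            parent = elite[rng.integers(len(elite))]
            population.append(parent + rng.normal(0, MUTATION_STD, size=parent.shape))
    return max(population, key=run_episode)

best_controller = evolve()
```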
Code for the project can be found here
Apple Patents for Automatic 3D Avatar Creation and Emotional States
Something to expect in the future regarding online identity (both patents were filed today):
A three-dimensional (“3D”) avatar can be automatically created that resembles the physical appearance of an individual captured in one or more input images or video frames. The avatar can be further customized by the individual in an editing environment and used in various applications, including but not limited to gaming, social networking and video conferencing.
I wonder if this will be connected to Apple’s purchase of the depth-sensor company PrimeSense. [Link to patent file]
Methods, systems, and computer-readable media for creating and using customized avatar instances to reflect current user states are disclosed. In various implementations, the user states can be defined using trigger events based on user-entered textual data, emoticons, or states of the device being used. For each user state, a customized avatar instance having a facial expression, body language, accessories, clothing items, and/or a presentation scheme reflective of the user state can be generated.
[Link to patent file]
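As a rough reading of that second filing, the mechanism is a mapping from trigger events (typed text, emoticons, device state) to a user state, and from each user state to a customized avatar instance. The snippet below is purely illustrative; the patent specifies no API, and every name and rule here is made up:

```python
# Rough sketch of the idea in the second patent: map trigger events to a
# "user state", then produce an avatar instance customized for that state.
# All names and rules here are invented for illustration.
from dataclasses import dataclass
from typing import Optional

@dataclass
class AvatarInstance:
    expression: str
    body_language: str
    accessory: Optional[str] = None

# Trigger events -> user state. In the patent these can come from
# user-entered text, emoticons, or the state of the device itself.
TRIGGERS = {
    ":)": "happy",
    ":(": "sad",
    "low_battery": "tired",
}

# User state -> customized avatar instance.
STATE_TO_AVATAR = {
    "happy": AvatarInstance("smile", "upright", accessory="party hat"),
    "sad": AvatarInstance("frown", "slumped"),
    "tired": AvatarInstance("half-closed eyes", "leaning"),
}

def avatar_for_event(event: str) -> AvatarInstance:
    state = TRIGGERS.get(event, "neutral")
    return STATE_TO_AVATAR.get(state, AvatarInstance("neutral", "relaxed"))

print(avatar_for_event(":)"))   # AvatarInstance(expression='smile', ...)
```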
Game developed by Glen Chiacchieri in which players score by hitting their opponent’s socks with a laser pointer, filling up the opponent’s health meter; it is a proof-of-concept implementation of the computing concept ‘Hypercard in the World’:
In the video above two people are playing Laser Socks, a game I invented in an afternoon using a research programming system, common household items, and a couple lines of code.
Players try to point a laser pointer at their opponent’s socks while dodging their opponent’s laser. Whenever they score a hit, the health meter closest to their opponent’s play area fills up with blue light. Whoever gets their opponent’s meter to fill up first wins.
In August 2015, my research group (The Communications Design Group or CDG) had a game jam — an event where participants create games together over the course of a few days. The theme was to make hybrid physical/digital games using a prototype research system Bret Victor and Robert Ochshorn had made called Hypercard in the World. This system was like an operating system for an entire room — it connected cameras, projectors, computers, databases, and laser pointers throughout the lab to let people write programs that would magically add projected graphics and interactivity to physical objects. The point of the jam was to see what playful things you could make with this kind of system. We ended up making more than a dozen new and diverse games.
I made Laser Socks, a game about jumping around and shooting a laser pointer at an opponent’s feet. It was fun, ridiculous, and simple to make. In some ways, Laser Socks became one of the highlight demonstrations of what could be done if there was a medium of expression that integrated dynamic computational elements into the physical world.
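Hypercard in the World itself isn’t publicly available, so as a stand-in, here is a hedged sketch of the core trick a system like this needs: watch a camera feed, look for a bright laser dot inside each player’s zone, and fill that player’s meter on a hit. The zone coordinates, thresholds, and use of OpenCV are assumptions for illustration, not how Laser Socks was actually built:

```python
# Illustrative sketch: detect a laser-pointer dot in each player's play area
# from a webcam feed and fill that player's meter when a hit is seen.
# Zones and thresholds are made up; OpenCV is assumed as the camera library.
import cv2

# (x, y, width, height) regions of the camera frame covering each play area.
ZONES = {"player_1": (0, 200, 320, 280), "player_2": (320, 200, 320, 280)}
health = {"player_1": 0, "player_2": 0}
HIT_PIXELS = 20      # how many saturated bright pixels count as a laser hit
MAX_HEALTH = 100

cap = cv2.VideoCapture(0)
while all(hp < MAX_HEALTH for hp in health.values()):
    ok, frame = cap.read()
    if not ok:
        break
    for player, (x, y, w, zh) in ZONES.items():
        roi = frame[y:y + zh, x:x + w]
        hsv = cv2.cvtColor(roi, cv2.COLOR_BGR2HSV)
        # A laser dot shows up as a small patch of very bright, saturated red.
        mask = cv2.inRange(hsv, (0, 100, 230), (10, 255, 255))
        if cv2.countNonZero(mask) > HIT_PIXELS:
            health[player] += 1          # the opponent scored a hit in this zone
print("loser:", max(health, key=health.get))   # first meter to fill loses
cap.release()
```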
More Here
Abstract Poster Inspired by Ghost in the Shell