Continuing from my previous post, a little FYI …
You can download your model and upload it to @sketchfab
The example above was created using this current Tumblr Radar image from @made
On Aug. 21, 2017, a solar eclipse will be visible in North America. Throughout the continent, the Moon will cover part – or all – of the Sun’s super-bright face for part of the day.
Since it’s never safe to look at the partially eclipsed or uneclipsed Sun, everyone who plans to watch the eclipse needs a way to do so safely. One of the easiest ways to watch an eclipse is with solar viewing glasses – but there are a few things to check to make sure your glasses are safe:
Glasses should have an ISO 12312-2 certification
They should also have the manufacturer’s name and address, and you can check if the manufacturer has been verified by the American Astronomical Society
Make sure they have no scratches or damage
To use solar viewing glasses, make sure you put them on before looking up at the Sun, and look away before you remove them. Proper solar viewing glasses are extremely dark, and the landscape around you will look totally black when you put them on – all you should see is the Sun (and perhaps any extremely bright lights nearby).
Never use solar viewing glasses while looking through a telescope, binoculars, camera viewfinder, or any other optical device. The concentrated solar rays will damage the filter and enter your eyes, causing serious injury. But you can wear solar viewing glasses over your regular eyeglasses, if you have them!
If you don’t have solar viewing glasses, there are still ways to watch, like making your own pinhole projector. You can make a handheld box projector with just a few simple supplies – or simply hold any object with a small hole (like a piece of cardstock with a pinhole, or even a colander) above a piece of paper on the ground to project tiny images of the Sun.
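If you're wondering how large the projected image will be, a rough rule of thumb (not from the original post) is that the Sun's image is about 1/109 of the distance between the pinhole and the paper, because the Sun subtends roughly half a degree of sky. A quick sketch:

```python
# Back-of-the-envelope size of a pinhole-projected Sun image.
# Assumption: the Sun's apparent diameter is about 0.53 degrees.
import math

SUN_ANGULAR_DIAMETER_DEG = 0.53

def projected_sun_diameter_cm(hole_to_paper_cm):
    """Diameter of the projected solar image for a given projection distance."""
    return hole_to_paper_cm * math.tan(math.radians(SUN_ANGULAR_DIAMETER_DEG))

for distance_cm in (30, 100, 200):
    print(f"{distance_cm} cm from hole to paper -> image about "
          f"{projected_sun_diameter_cm(distance_cm):.1f} cm across")
```

So a projector held about a meter above the paper gives a solar image roughly a centimeter wide: small, but big enough to watch the Moon take a bite out of it.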
Of course, you can also watch the entire eclipse online with us. Tune into nasa.gov/eclipselive starting at noon ET on Aug. 21!
For people in the path of totality, there will be a few brief moments when it is safe to look directly at the eclipse. Only once the Moon has completely covered the Sun and there is no light shining through is it safe to look at the eclipse. Make sure you put your eclipse glasses back on or return to indirect viewing before the first flash of sunlight appears around the Moon’s edge.
You can look up the length of the total eclipse in your area to help you set a timer for the appropriate length of time. Remember – this only applies to people within the path of totality.
Everyone else will need to use eclipse glasses or indirect viewing throughout the entire eclipse!
Whether you’re an amateur photographer or a selfie master, try out these tips for photographing the eclipse.
#1 — Safety first: Make sure you have the required solar filter to protect your camera.
#2 — Any camera is a good camera, whether it’s a high-end DSLR or a camera phone – what matters most is a good eye and a vision for the image you want to create.
#3 — Look up, down, and all around. As the Moon slips in front of the Sun, the landscape will be bathed in long shadows, creating eerie lighting. Light filtering through the overlapping leaves of trees, which creates natural pinholes, will also project mini eclipse replicas on the ground. Everywhere you can point your camera can yield exceptional imagery, so be sure to compose some wide-angle photos that can capture your eclipse experience.
#4 — Practice: Be sure you know the capabilities of your camera before Eclipse Day. Most cameras, and even many camera phones, have adjustable exposures, which can help you darken or lighten your image during the tricky eclipse lighting. Make sure you know how to manually focus the camera for crisp shots.
#5 — Upload your eclipse images to NASA’s Eclipse Flickr Gallery and relive the eclipse through other people’s images.
Learn all about the Aug. 21 eclipse at eclipse2017.nasa.gov, and follow @NASASun on Twitter and NASA Sun Science on Facebook for more. Watch the eclipse through the eyes of NASA at nasa.gov/eclipselive starting at 12 PM ET on Aug. 21.
Make sure to follow us on Tumblr for your regular dose of space: http://nasa.tumblr.com
A.I. Artificial Intelligence (2001), dir. Steven Spielberg
Thanks Graylock
Our newest communications satellite, named the Tracking and Data Relay Satellite-M or TDRS-M, launches Aug. 18 aboard an Atlas V rocket from our Kennedy Space Center in Florida. It will be the 13th TDRS satellite and will replenish the fleet of satellites supporting the Space Network, which provides nearly continuous global communications services to more than 40 of our missions.
Communicating from space wasn’t always so easy. During our third attempt to land on the Moon in 1970, the Apollo 13 crew had to abort their mission when the spacecraft’s oxygen tank suddenly exploded and destroyed much of the essential equipment onboard. In the events made famous by Ron Howard’s movie ‘Apollo 13,’ starring Tom Hanks, our engineers on the ground worked with the crew to fix the issue. Back in 1970, the astronauts could only communicate with their ground teams for about 15 percent of each orbit – adding yet another challenge for the crew. Thankfully, our Apollo 13 astronauts survived and safely returned to Earth.
Now, our astronauts don’t have to worry about being disconnected from their teams! With the creation of the TDRS program in 1973, space communications coverage increased rapidly from 15 percent to 85 percent. And as we’ve continued to add TDRS spacecraft, coverage has zoomed to over 98 percent!
TDRS is a fleet of satellites that beam data from low-Earth-orbiting space missions to scientists on the ground. These data range from cool galaxy images from the Hubble Space Telescope to high-def videos from astronauts on the International Space Station! TDRS is operated by our Space Network, and it is thanks to these hardworking engineers and scientists that we can continuously advance our knowledge about the universe!
What’s up next in space comm? Only the coolest stuff ever! LASER BEAMS. Our scientists are creating ways to communicate space data from missions through lasers, which have the ability to transfer more data per minute than typical radio-frequency systems. Both radio-frequency and laser comm systems send data at the speed of light, but with laser comm’s ability to send more data at a time through infrared waves, we can receive more information and further our knowledge of space.
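To put “more data per minute” in rough numbers, here is an illustrative comparison. The rates below are assumed example figures for the sketch, not official TDRS or LCRD specifications:

```python
# Illustrative downlink-time comparison: radio frequency vs. laser (optical).
# The rates are assumed example values, not official mission specifications.
RF_RATE_MBPS = 300       # assumed radio-frequency downlink rate (megabits/s)
LASER_RATE_MBPS = 1200   # assumed optical downlink rate (megabits/s)

DATA_GIGABITS = 800      # e.g., a large batch of science imagery

def downlink_minutes(rate_mbps: float) -> float:
    """Minutes needed to send DATA_GIGABITS at the given link rate."""
    return DATA_GIGABITS * 1000 / rate_mbps / 60

print(f"Radio frequency: {downlink_minutes(RF_RATE_MBPS):.1f} minutes")
print(f"Laser comm:      {downlink_minutes(LASER_RATE_MBPS):.1f} minutes")
```

Same speed-of-light travel time either way; the win is in how many bits fit into each second of contact.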
How are we initiating laser comm? Our Laser Communications Relay Demonstration is launching in 2019! We’re only two short years away from beaming space data through lasers! This laser communications demo is the next step to strengthen this technology, which uses less power and takes up less space on a spacecraft, leaving more power and room for science instruments.
Watch the TDRS launch live online at 8:03 a.m. EDT on Aug. 18: https://www.nasa.gov/nasalive
Join the conversation on Twitter: @NASA_TDRS and @NASALasercomm!
Make sure to follow us on Tumblr for your regular dose of space: http://nasa.tumblr.com
Yep. That was quick. In a certain way.
Dominik Koller is starting workshops this summer in Berlin on vvvv, a creative coding platform used by many professionals in the interactive field – no experience required, only your own laptop:
We are launching our first ever full vvvv course in August 2017.
In eight weekly sessions, this course provides you with a strong foundation for using creative technology and building interactive interfaces.
No previous knowledge needed.
Each week, we will focus on a topic:
2x vvvv basics
Sound reactive visuals
2x Projection Mapping
Motion Tracking: Kinect
Arduino and Electronics
3D and Virtual Reality
More Here
Graphics research from Stanford University and collaborators is the latest development in facial expression transfer and visual puppetry, offering photorealistic and editable results:
We present a novel approach that enables photo-realistic re-animation of portrait videos using only an input video. In contrast to existing approaches that are restricted to manipulations of facial expressions only, we are the first to transfer the full 3D head position, head rotation, face expression, eye gaze, and eye blinking from a source actor to a portrait video of a target actor. The core of our approach is a generative neural network with a novel space-time architecture. The network takes as input synthetic renderings of a parametric face model, based on which it predicts photo-realistic video frames for a given target actor. The realism in this rendering-to-video transfer is achieved by careful adversarial training, and as a result, we can create modified target videos that mimic the behavior of the synthetically-created input. In order to enable source-to-target video re-animation, we render a synthetic target video with the reconstructed head animation parameters from a source video, and feed it into the trained network – thus taking full control of the target. With the ability to freely recombine source and target parameters, we are able to demonstrate a large variety of video rewrite applications without explicitly modeling hair, body or background. For instance, we can reenact the full head using interactive user-controlled editing, and realize high-fidelity visual dubbing. To demonstrate the high quality of our output, we conduct an extensive series of experiments and evaluations, where for instance a user study shows that our video edits are hard to detect.
More Here
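For a feel of the rendering-to-video idea described above, here is a heavily simplified PyTorch sketch of a conditional generator trained adversarially to turn synthetic face renderings into realistic frames. It is not the authors’ architecture (their system uses a space-time network conditioned on a parametric face model and far more careful training); it only illustrates the conditional adversarial principle the abstract describes:

```python
# Minimal conditional-GAN sketch: map a synthetic rendering to a realistic frame.
# This is an illustrative stand-in, not the paper's actual space-time network.
import torch
import torch.nn as nn
import torch.nn.functional as F

class Generator(nn.Module):
    """Translates a 3-channel synthetic rendering into a 3-channel video frame."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 64, 4, stride=2, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(64, 128, 4, stride=2, padding=1), nn.ReLU(inplace=True),
            nn.ConvTranspose2d(128, 64, 4, stride=2, padding=1), nn.ReLU(inplace=True),
            nn.ConvTranspose2d(64, 3, 4, stride=2, padding=1), nn.Tanh(),
        )
    def forward(self, rendering):
        return self.net(rendering)

class Discriminator(nn.Module):
    """Scores (rendering, frame) pairs as real or generated, patch by patch."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(6, 64, 4, stride=2, padding=1), nn.LeakyReLU(0.2, inplace=True),
            nn.Conv2d(64, 128, 4, stride=2, padding=1), nn.LeakyReLU(0.2, inplace=True),
            nn.Conv2d(128, 1, 4, stride=1, padding=1),
        )
    def forward(self, rendering, frame):
        return self.net(torch.cat([rendering, frame], dim=1))

G, D = Generator(), Discriminator()
bce = nn.BCEWithLogitsLoss()
opt_g = torch.optim.Adam(G.parameters(), lr=2e-4)
opt_d = torch.optim.Adam(D.parameters(), lr=2e-4)

# One illustrative training step on random stand-in tensors.
rendering = torch.rand(1, 3, 64, 64)            # synthetic render of the face model
real_frame = torch.rand(1, 3, 64, 64) * 2 - 1   # matching real frame, scaled to [-1, 1]

fake_frame = G(rendering)

# Discriminator: push real pairs toward 1 and generated pairs toward 0.
d_real = D(rendering, real_frame)
d_fake = D(rendering, fake_frame.detach())
loss_d = bce(d_real, torch.ones_like(d_real)) + bce(d_fake, torch.zeros_like(d_fake))
opt_d.zero_grad(); loss_d.backward(); opt_d.step()

# Generator: fool the discriminator while staying close to the real frame.
d_fake = D(rendering, fake_frame)
loss_g = bce(d_fake, torch.ones_like(d_fake)) + F.l1_loss(fake_frame, real_frame)
opt_g.zero_grad(); loss_g.backward(); opt_g.step()
```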
Today, we’re celebrating the Red Planet! Since our first close-up picture of Mars in 1965, spacecraft voyages to the Red Planet have revealed a world strangely familiar, yet different enough to challenge our perceptions of what makes a planet work.
You’d think Mars would be easier to understand. Like Earth, Mars has polar ice caps and clouds in its atmosphere, seasonal weather patterns, volcanoes, canyons and other recognizable features. However, conditions on Mars vary wildly from what we know on our own planet.
Viking Landers
Our Viking Project found a place in history when it became the first U.S. mission to land a spacecraft safely on the surface of Mars and return images of the surface. Two identical spacecraft, each consisting of a lander and an orbiter, were built. Each orbiter-lander pair flew together and entered Mars orbit; the landers then separated and descended to the planet’s surface.
Besides taking photographs and collecting other science data, the two landers conducted three biology experiments designed to look for possible signs of life.
Pathfinder Rover
In 1997, Pathfinder was the first-ever robotic rover to land on the surface of Mars. It was designed as a technology demonstration of a new way to deliver an instrumented lander to the surface of a planet. Mars Pathfinder used an innovative method of directly entering the Martian atmosphere, assisted by a parachute to slow its descent and a giant system of airbags to cushion the impact.
Pathfinder not only accomplished its goal but also returned an unprecedented amount of data and outlived its primary design life.
Spirit and Opportunity
In January 2004, two robotic geologists named Spirit and Opportunity landed on opposite sides of the Red Planet. With far greater mobility than the 1997 Mars Pathfinder rover, these robotic explorers have trekked for miles across the Martian surface, conducting field geology and making atmospheric observations. Carrying identical, sophisticated sets of science instruments, both rovers have found evidence of ancient Martian environments where intermittently wet and habitable conditions existed.
Both missions exceeded their planned 90-day mission lifetimes by many years. Spirit lasted 20 times longer than its original design until its final communication to Earth on March 22, 2010. Opportunity continues to operate more than a decade after launch.
Mars Reconnaissance Orbiter
Our Mars Reconnaissance Orbiter left Earth in 2005 on a search for evidence that water persisted on the surface of Mars for a long period of time. While other Mars missions have shown that water flowed across the surface in Mars’ history, it remained a mystery whether water was ever around long enough to provide a habitat for life.
In addition to using the orbiter to study Mars, we’re using data and imagery from this mission to survey possible future human landing sites on the Red Planet.
Curiosity
The Curiosity rover is the largest and most capable rover ever sent to Mars. It launched on Nov. 26, 2011, and landed on Mars on Aug. 5, 2012. Curiosity set out to answer the question: Did Mars ever have the right environmental conditions to support small life forms called microbes?
Early in its mission, Curiosity’s scientific tools found chemical and mineral evidence of past habitable environments on Mars. It continues to explore the rock record from a time when Mars could have been home to microbial life.
Space Launch System Rocket
We’re currently building the world’s most powerful rocket, the Space Launch System (SLS). When completed, this rocket will enable astronauts to begin their journey to explore destinations far into the solar system, including Mars.
Orion Spacecraft
The Orion spacecraft will sit atop the Space Launch System rocket as it launches humans deeper into space than ever before. Orion will serve as the exploration vehicle that will carry the crew to space, provide emergency abort capability, sustain the crew during the space travel and provide safe re-entry from deep space return velocities.
Mars 2020
The Mars 2020 rover mission takes the next step in exploration of the Red Planet by not only seeking signs of habitable conditions in the ancient past, but also searching for signs of past microbial life itself.
The Mars 2020 rover introduces a drill that can collect core samples of the most promising rocks and soils and set them aside in a “cache” on the surface of Mars. The mission will also test a method for producing oxygen from the Martian atmosphere, identify other resources (such as subsurface water), improve landing techniques and characterize weather, dust and other potential environmental conditions that could affect future astronauts living and working on the Red Planet.
For decades, we’ve sent orbiters, landers and rovers, dramatically increasing our knowledge about the Red Planet and paving the way for future human explorers. Mars is the next tangible frontier for human exploration, and it’s an achievable goal. There are challenges to pioneering Mars, but we know they are solvable.
To discover more about Mars exploration, visit: https://www.nasa.gov/topics/journeytomars/index.html
Make sure to follow us on Tumblr for your regular dose of space: http://nasa.tumblr.com
A Japanese programmer has unveiled proof-of-concept effects for an augmented reality game made with ARKit, including visual filters and Predator-like optical camouflage:
ミッション1【野良アンドロイド(光学迷彩搭載機)の発見・確保】 #ARKit pic.twitter.com/7m0esEGrUt
— kidachi (@kidach1) August 19, 2017
[Translation:] Mission 1: Find and secure the stray android (a unit equipped with optical camouflage) #ARKit
You can follow Kidachi on Twitter here
Computer vision research from Jiajun Lu, Hussein Sibai and Evan Fabry examines blocking neural network object detection using what looks like DeepDream-esque camouflage:
An adversarial example is an example that has been adjusted to produce a wrong label when presented to a system at test time. To date, adversarial example constructions have been demonstrated for classifiers, but not for detectors. If adversarial examples that could fool a detector exist, they could be used to (for example) maliciously create security hazards on roads populated with smart vehicles. In this paper, we demonstrate a construction that successfully fools two standard detectors, Faster RCNN and YOLO. The existence of such examples is surprising, as attacking a classifier is very different from attacking a detector, and that the structure of detectors - which must search for their own bounding box, and which cannot estimate that box very accurately - makes it quite likely that adversarial patterns are strongly disrupted. We show that our construction produces adversarial examples that generalize well across sequences digitally, even though large perturbations are needed. We also show that our construction yields physical objects that are adversarial.
The paper can be found here
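For context on the underlying idea, here is a minimal PyTorch sketch of the classic adversarial-example recipe (FGSM) against an off-the-shelf image classifier. It is not the detector attack from the paper (fooling Faster RCNN or YOLO requires the more involved construction the authors describe), but it shows the core trick of nudging pixels along the loss gradient until the network’s prediction changes while the image looks essentially the same:

```python
# Fast Gradient Sign Method (FGSM) on a pretrained classifier - illustrative only.
# The paper above attacks object detectors, which is a harder, related problem.
import torch
import torch.nn.functional as F
from torchvision import models

model = models.resnet18(pretrained=True).eval()

def fgsm(image, label, epsilon=0.03):
    """Return an adversarially perturbed copy of `image` (shape [N, 3, H, W], values in [0, 1])."""
    image = image.clone().requires_grad_(True)
    loss = F.cross_entropy(model(image), label)
    loss.backward()
    # Step each pixel in the direction that increases the loss, then clamp to valid range.
    adversarial = image + epsilon * image.grad.sign()
    return adversarial.clamp(0, 1).detach()

# Illustrative usage with a random stand-in image.
x = torch.rand(1, 3, 224, 224)
label = model(x).argmax(dim=1)                 # the model's original prediction
x_adv = fgsm(x, label)
print("before:", model(x).argmax(dim=1).item(),
      "after:", model(x_adv).argmax(dim=1).item())   # labels often differ
```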