Our newest communications satellite, the Tracking and Data Relay Satellite-M (TDRS-M), launches Aug. 18 aboard an Atlas V rocket from our Kennedy Space Center in Florida. It will be the 13th TDRS satellite and will replenish the fleet of satellites supporting the Space Network, which provides nearly continuous global communications services to more than 40 of our missions.
Communicating from space wasn’t always so easy. During our third attempt to land on the moon in 1970, the Apollo 13 crew had to abort their mission when the spacecraft’s oxygen tank suddenly exploded and destroyed much of the essential equipment onboard. As made famous in Ron Howard’s movie ‘Apollo 13’ starring Tom Hanks, our NASA engineers on the ground worked with the crew to improvise a fix. Back in 1970, astronauts could only communicate with their ground teams for 15 percent of each orbit – adding yet another challenge for the crew. Thankfully, our Apollo 13 astronauts survived and safely returned to Earth.
Now, our astronauts don’t have to worry about being disconnected from their teams! With the creation of the TDRS program in 1973, space communications coverage increased rapidly from 15 percent to 85 percent. And as we’ve continued to add TDRS spacecraft, coverage has zoomed to over 98 percent!
TDRS is a fleet of satellites that beam data from low-Earth-orbiting space missions to scientists on the ground. These data range from cool galaxy images from the Hubble Space Telescope to high-def videos from astronauts on the International Space Station! TDRS is operated by our Space Network, and it is thanks to these hardworking engineers and scientists that we can continuously advance our knowledge about the universe!
What’s up next in space comm? Only the coolest stuff ever! LASER BEAMS. Our scientists are developing ways to send mission data through lasers, which can transfer more data per minute than typical radio-frequency systems. Both radio-frequency and laser comm systems send data at the speed of light, but because laser comm can send more data at a time through infrared waves, we can receive more information and further our knowledge of space.
How are we initiating laser comm? Our Laser Communications Relay Demonstration is launching in 2019! We’re only two short years away from beaming space data through lasers! This demo is the next step in maturing the technology, which uses less power and takes up less space on a spacecraft, leaving more power and room for science instruments.
Watch the TDRS launch live online at 8:03 a.m. EDT on Aug. 18: https://www.nasa.gov/nasalive
Join the conversation on Twitter: @NASA_TDRS and @NASALasercomm!
Make sure to follow us on Tumblr for your regular dose of space: http://nasa.tumblr.com
Little visual experiment by Henry Everett employs iOS ARKit to produce that familiar computer crash effect:
Getting some errors in #ARKit today. Cc: @comboldn pic.twitter.com/PWjS0npiUI
— Henry Everett (@henryeverett)
August 16, 2017
Link
Marble machine
Tele-Present Water by David Bowen
I rarely use the phrase ‘mind blown’, but this is one of those rare occurrences.
An art installation which combines real-time data, mechanical puppetry, and a physical version of the grid surface usually rendered virtually on computers:
This installation draws information from the intensity and movement of the water in a remote location. Wave data is being collected in real-time from National Oceanic and Atmospheric Administration data buoy station 46246, 49.985 N 145.089 W (49°59'7" N 145°5'20" W) on the Pacific Ocean. The wave intensity and frequency are scaled and transferred to the mechanical grid structure, resulting in a simulation of the physical effects caused by the movement of water from halfway around the world.
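Out of curiosity about the plumbing: NOAA’s National Data Buoy Center publishes each station’s latest observations as plain text, so the data step amounts to polling a feed, reading out wave height and period, and scaling them for the motors. A minimal Python sketch of that idea – the feed URL pattern, the scaling constants, and the actuator mapping are my assumptions, not the artist’s actual code:

```python
import urllib.request

# NDBC realtime feeds are whitespace-delimited text; station 46246 is the
# buoy cited by the artist. The URL pattern is an assumption based on
# NOAA's public realtime2 service, not taken from the artwork itself.
FEED = "https://www.ndbc.noaa.gov/data/realtime2/46246.txt"

def latest_wave_reading():
    """Return (significant wave height in metres, dominant period in seconds)."""
    lines = urllib.request.urlopen(FEED).read().decode().splitlines()
    header = lines[0].lstrip("#").split()          # line 0: column names, line 1: units
    newest = dict(zip(header, lines[2].split()))   # line 2: most recent sample
    # Missing values appear as "MM" in NDBC feeds and would need handling.
    return float(newest["WVHT"]), float(newest["DPD"])

def to_actuator(height_m, period_s, max_height_m=5.0):
    """Map wave height to a 0..1 stroke and period to a motion rate in Hz.
    The scaling constants are hypothetical stand-ins for the artist's mapping."""
    stroke = min(height_m / max_height_m, 1.0)
    rate = 1.0 / max(period_s, 1.0)
    return stroke, rate

if __name__ == "__main__":
    height, period = latest_wave_reading()
    print("%.2f m waves every %.1f s ->" % (height, period), to_actuator(height, period))
```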
A link to the artist’s website for this work can be found here
Dominik Koller is starting workshops this summer in Berlin to teach vvvv, a creative coding platform used by many professionals in the interactive field – no experience required, only your own laptop:
We are launching our first ever full vvvv course in August 2017.
In eight weekly sessions, this course provides you with a strong foundation for using creative technology and building interactive interfaces.
No previous knowledge needed.
Each week, we will focus on a topic:
2x vvvv basics
Sound reactive visuals
2x Projection Mapping
Motion Tracking: Kinect
Arduino and Electronics
3D and Virtual Reality
More Here
Video game created by @slow-bros is an adventure whose assets were originally handmade to give the experience a stop-motion feel:
Harold Halibut is a modern adventure game with a strong focus on storytelling and exploration. Set in a spaceship stuck undersea on a distant water planet, you slip into the tiny shoes of Harold. As a young janitor and lab assistant to Professor Jeanne Mareaux, one of the lead scientists on board, he tries to help out in her attempt to find a way to relaunch the ship.
All that can be seen in the game is carefully built in a real-world workshop using classic sculpting, set building and clay and puppet fabrication techniques. We’re not even buying supplemental model train trees or anything.
Our love of stop-motion films, childhood nostalgia and respect for traditional craftsmanship are some reasons for this. Patience and taking a break from an ultra-fast paced digital reality are big factors as well.
The project has just launched a Kickstarter campaign with more information, which you can find here
Another experiment from LuluXXX exploring machine learning visual outputs, combining black-and-white image colourization, SLIC superpixel image segmentation, and style2paint manga colourization on footage of Aya-Bambi dancing:
mixing slic superpixels and style2paint. starts with version2 then version4 plus a little breakdown at the end and all 4 versions of the algorithm. music : Spettro from NadjaLind mix - https://soundcloud.com/nadjalind/nadja-lind-pres-new-lucidflow-and-sofa-sessions-meowsic-mix original footage : https://www.youtube.com/watch?v=N2YhzBjiYUg
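For the curious, the superpixel half of that mix is easy to try at home: SLIC clusters pixels into small, perceptually coherent regions, and flattening each region to its average colour gives the posterised patches visible in the footage. A minimal sketch using scikit-image – the frame path and segment count are placeholders, and style2paint itself is a separate neural model not reproduced here:

```python
from skimage import color, io, segmentation

# One frame of the footage (placeholder path -- any RGB image works).
frame = io.imread("frame.png")

# SLIC groups pixels by colour and position into ~400 superpixels;
# higher compactness gives rounder, more regular regions.
labels = segmentation.slic(frame, n_segments=400, compactness=10, start_label=1)

# Paint each superpixel with its average colour -- the flat, posterised
# patches seen in the video.
flattened = color.label2rgb(labels, frame, kind="avg", bg_label=0)

io.imsave("frame_slic.png", (flattened * 255).astype("uint8"))
```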
Link
Programming project from Or Fleisher and Anastasis Germanidis combines Augmented Reality and Machine Learning, using a neural net trained to predict age from a mobile device’s camera:
‘Death-Mask’ predicts how long people have to live and overlays that in the form of a “clock” above their heads in augmented reality. The project uses a machine learning model titled AgeNet for the prediction process. Once an age is predicted, it uses the average life expectancy in that location to estimate how long one has left.
The aesthetic inspiration derives from the concept of death masks: sculptures meant to symbolize the death of a person by casting their face (i.e. a mask).
The experiment uses ARKit to render the visual content in augmented reality on an iPad and CoreML to run the machine learning model in real-time. The project is by no means an accurate representation of one’s life expectancy and is more oriented towards the examination of public information in augmented reality in the age of deep learning.
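Stripped of the AR rendering, the countdown logic is simple arithmetic: predict an age, look up the local life expectancy, subtract. A hedged Python sketch of that step – the `predict_age` stub stands in for the AgeNet model, and the expectancy figures are illustrative placeholders rather than the project’s actual data source:

```python
import random

# Illustrative life-expectancy table (years at birth); the real project
# would look up the average for the viewer's current location.
LIFE_EXPECTANCY = {"US": 78.7, "JP": 84.2, "DE": 81.0}

def predict_age(face_image):
    """Stand-in for AgeNet: the real app runs a CoreML model on the
    camera frame. Here we just return a plausible random guess."""
    return random.uniform(18, 80)

def countdown_label(face_image, country="US"):
    age = predict_age(face_image)
    # Clamp at zero: the model can guess an age above the local expectancy.
    remaining = max(LIFE_EXPECTANCY[country] - age, 0.0)
    # This number is what the AR layer renders as the floating "clock".
    return "%.1f years remaining" % remaining

print(countdown_label(None))
```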
Link
The ultimate puzzle!
Speculative Design video short from Benedict Hubener, Keyur Jain, and James Zhou of CIID imagines how the future of Smart City maintenance works, with on-site engineers using machine-learning equipment to service surveillance infrastructure:
In the near future, cities are filled with smart infrastructure such as decentralized security cameras, self-sorting trashcans and intelligent street lights. But who do you call when smart things break? The future smart city is not a sci-fi dystopia made out of glass, concrete, and job-stealing robots. It’s a place much like our own, filled with the banality of everyday life and mundane jobs. Regardless of how you imagine the future smart city, someone needs to get in their white van, take out their ladder, and fix broken things.
The SMLT 3607A, or Supervised Machine Learning Trainer, is a tool for the future city maintenance worker. He/she can use the SMLT to interface with abnormally behaving smart infrastructure, such as a surveillance camera identifying people as eggplants. He/she can retrain the smart camera by recording new examples in real time. The future maintenance worker will teach the camera what it’s seeing and curate the training dataset. He/she will help the camera learn the difference between people and objects and decide who should be classified as an upstanding citizen or a petty criminal.
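The retraining loop the film imagines maps neatly onto standard online learning: the worker shows the misbehaving camera freshly labelled examples and the classifier updates incrementally rather than being retrained from scratch. A speculative Python sketch with scikit-learn – the feature extractor, the labels, and the classifier choice are all illustrative, since the SMLT itself is fiction:

```python
import numpy as np
from sklearn.linear_model import SGDClassifier

CLASSES = np.array(["person", "eggplant"])

# An incremental classifier: partial_fit keeps accepting new labelled
# examples without retraining from scratch -- the SMLT workflow in miniature.
clf = SGDClassifier()

def embed(frame):
    """Stand-in feature extractor; a real smart camera would use a CNN embedding."""
    return np.asarray(frame, dtype=float).ravel()

def teach(frame, label):
    """One correction from the maintenance worker: 'no, that is a person'."""
    clf.partial_fit([embed(frame)], [label], classes=CLASSES)

# In the field: record a few fresh examples of each class (synthetic here)...
rng = np.random.default_rng(0)
for _ in range(20):
    teach(rng.normal(0.0, 1.0, 64), "person")
    teach(rng.normal(3.0, 1.0, 64), "eggplant")

# ...then check the camera has stopped seeing people as eggplants.
print(clf.predict([embed(rng.normal(0.0, 1.0, 64))])[0])  # expected: "person"
```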
Link