Earth is a place dominated by water, mainly oceans. It’s also a place our researchers study to understand life. Trillions of gallons of water flow freely across the surface of our blue-green planet, and the ocean’s vibrant ecosystems impact our lives in many ways.
In celebration of World Oceans Day, here are a few things you might not know about these complex waterways.
The way light is absorbed and scattered throughout the ocean determines which colors it takes on. Red, orange, yellow, and green light are absorbed quickly beneath the surface, leaving blue light to be scattered and reflected back. This causes us to see various blue and violet hues.
Follow the phytoplankton! These small plant-like organisms are the beginning of the food web for most of the ocean. As phytoplankton grow and multiply, they are eaten by zooplankton, small fish and other animals. Larger animals then eat the smaller ones. The fishing industry identifies good spots by using ocean color images to locate areas rich in phytoplankton. Phytoplankton, as revealed by ocean color, frequently show scientists where ocean currents provide nutrients for plant growth.
When we look at the ocean from space, we see many different shades of blue. Using instruments that are more sensitive than the human eye, we can carefully measure the fantastic array of colors of the ocean. Different colors may reveal the presence and amount of phytoplankton, sediments and dissolved organic matter.
About 70 percent of the planet is ocean, with an average depth of more than 12,400 feet. Given that light doesn’t penetrate much deeper than 330 feet below the water’s surface (in the clearest water), most of our planet is in a perpetual state of darkness. Although dark, this part of the ocean still supports many forms of life, some of which are fed by sinking phytoplankton.
Instruments on satellites in space, hundreds of kilometers above us, can measure many things about the sea: surface winds, sea surface temperature, water color, wave height, and height of the ocean surface.
The amount of salt varies depending on location. The Atlantic Ocean is saltier than the Pacific Ocean, for instance. Most of the salt in the ocean is the same kind of salt we put on our food: sodium chloride.
Take a small sample of ocean water and it will most likely have millions (yes, millions!) of bacteria and viruses, thousands of phytoplankton cells, and even some fish eggs, baby crabs, and small worms.
Just 3.5 percent of Earth’s water is fresh—that is, with few salts in it. You can find Earth’s freshwater in our lakes, rivers, and streams, but don’t forget groundwater and glaciers. Over 68 percent of Earth’s freshwater is locked up in ice and glaciers. And another 30 percent is in groundwater.
Just as forests are considered the “lungs of the Earth,” phytoplankton are known for providing the same service in the ocean! They consume carbon dioxide dissolved in the sunlit portion of the ocean and produce about half of the world’s oxygen.
Want to learn more about how we study the ocean? Follow @NASAEarth on Twitter.
Make sure to follow us on Tumblr for your regular dose of space: http://nasa.tumblr.com.
Installation by David Bowen reproduces real-time wind data with a collection of mechanized stalks:
This installation consists of a series of 126 x/y tilting mechanical devices connected to thin dried plant stalks installed in a gallery, and a dried plant stalk connected to an accelerometer installed outdoors. When the wind blows, it causes the stalk outside to sway. The accelerometer detects this movement, transmitting the motion to the grouping of devices in the gallery. The stalks in the gallery space therefore move in real time and in unison, based on the movement of the wind outside.
From May to September 2018, a newly expanded version of Tele-Present Wind was installed at Azkuna Zentroa, Bilbao, while the sensor was installed in an outdoor location adjacent to the Visualization and Digital Imaging Lab at the University of Minnesota. The individual components of the installation in Spain thus moved in unison as they mimicked the direction and intensity of the wind halfway around the world. As it monitored and collected real-time data from this remote location, the system relayed a physical representation of the dynamic and fluid environmental conditions.
More Here
Related: Another project by David from 2012 did something similar with ‘Tele-Present Water’ [Link]
Meet Cassie, a sleek bipedal robot made by Agility Robotics
Two Cassies decide to take a walking tour of our office. No CG: 100% actual robots.
More Here
https://github.com/yahoo/samoa
Machine learning and data mining are well established techniques in the world of IT and especially among web companies and startups. Spam detection, personalization and recommendations are just a few of the applications made possible by mining the huge quantity of data available nowadays. However, “big data” is not only about Volume, but also about Velocity and Variety (the three Vs of big data).
The usual pipeline for modeling data (what “data scientists” do) involves taking a sample from production data, cleaning and preprocessing it to make it usable, training a model for the task at hand and finally deploying it to production. The final output of this process is a pipeline that needs to run periodically (and be maintained) in order to keep the model up to date. Hadoop and its ecosystem (e.g., Mahout) have proven to be an extremely successful platform to support this process at web scale.
However, no solution is perfect, and big data is “data whose characteristics force us to look beyond the traditional methods that are prevalent at the time”. The current challenge is to move towards analyzing data as soon as it arrives in the system, in near real-time.
For example, models for mail spam detection get outdated with time and need to be retrained with new data. New data (i.e., spam reports) comes in continuously, and the model starts becoming outdated the moment it is deployed: all the new data sits there without creating any value until the next model update. By contrast, incorporating new data as soon as it arrives is what the “Velocity” in big data is about. In this case, Hadoop is not the ideal tool to cope with streams of fast-changing data.
Distributed stream processing engines are emerging as the platform of choice to handle this use case. Examples of these platforms are Storm, S4, and recently Samza. These platforms join the scalability of distributed processing with the fast response of stream processing. Yahoo has already adopted Storm as a key technology for low-latency big data processing.
Alas, there is currently no common solution for mining big data streams, that is, for doing machine learning on streams in a distributed environment.
SAMOA (Scalable Advanced Massive Online Analysis) is a framework for mining big data streams. Like most of the big data ecosystem, it is written in Java. It features a pluggable architecture that allows it to run on several distributed stream processing engines such as Storm and S4. SAMOA includes distributed algorithms for the most common machine learning tasks such as classification and clustering. For a simple analogy, you can think of SAMOA as Mahout for streaming.
SAMOA is both a platform and a library. As a platform, it allows algorithm developers to abstract from the underlying execution engine, and therefore reuse their code to run on different engines. It also makes it easy to write plug-in modules to port SAMOA to different execution engines.
As a library, SAMOA contains state-of-the-art implementations of algorithms for distributed machine learning on streams. The first alpha release allows classification and clustering.
For classification, we implemented a Vertical Hoeffding Tree (VHT), a distributed streaming version of decision trees tailored for sparse data (e.g., text). For clustering, we included a distributed algorithm based on CluStream. The library also includes meta-algorithms such as bagging.
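To make the “vertical” in VHT concrete, here is a toy sketch in plain Java. This is not SAMOA code, and the modulo routing is a simplifying assumption; it only illustrates how the attributes of a sparse instance can be key-grouped across workers, so that each worker maintains split statistics for its own slice of the attributes only.

import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

// Toy illustration of vertical parallelism: each worker accumulates
// statistics only for the attributes routed to it.
public class VerticalPartitionDemo {
    public static void main(String[] args) {
        final int numWorkers = 4;

        // One local-statistics table per worker: attribute index -> running count.
        List<Map<Integer, Integer>> workerStats = new ArrayList<>();
        for (int w = 0; w < numWorkers; w++) {
            workerStats.add(new HashMap<>());
        }

        // A sparse instance (e.g., a text document): (attribute index, value) pairs.
        int[][] instance = { {3, 1}, {17, 2}, {42, 1} };

        // Key grouping: a given attribute always routes to the same worker,
        // so the instance is split "vertically" across workers.
        for (int[] pair : instance) {
            int attribute = pair[0];
            int worker = attribute % numWorkers;
            workerStats.get(worker).merge(attribute, pair[1], Integer::sum);
        }

        for (int w = 0; w < numWorkers; w++) {
            System.out.println("worker " + w + " -> " + workerStats.get(w));
        }
    }
}

Because each attribute always lands on the same worker, each worker can evaluate split criteria for its slice locally, and only candidate splits need to be exchanged with the part of the algorithm that maintains the tree.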
An algorithm in SAMOA is represented by a series of nodes communicating via messages along streams that connect pairs of nodes (a graph). Borrowing the terminology from Storm, this is called a Topology. Each node in the Topology is a Processor that sends messages to a Stream. The user code that implements the algorithm resides inside a Processor. As an example, consider a Processor joining two streams from two source Processors; here is a code snippet to build such a topology in SAMOA.
TopologyBuilder builder;
Processor sourceOne = new SourceProcessor();
builder.addProcessor(sourceOne);
Stream streamOne = builder.createStream(sourceOne);

Processor sourceTwo = new SourceProcessor();
builder.addProcessor(sourceTwo);
Stream streamTwo = builder.createStream(sourceTwo);

Processor join = new JoinProcessor();
builder.addProcessor(join)
       .connectInputShuffle(streamOne)
       .connectInputKey(streamTwo);
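For context, here is a minimal sketch of what the user code inside such a Processor might look like. Treat the package path (assumed to sit under com.yahoo.labs.samoa.core, matching the DoTask namespace used below) and exact signatures as indicative rather than definitive, and see the wiki for the full API; the counting body is a placeholder, not a real join implementation.

import com.yahoo.labs.samoa.core.ContentEvent;
import com.yahoo.labs.samoa.core.Processor;

// Sketch of a Processor: the engine calls onCreate() once per deployed
// instance and process() for every event arriving on a connected input
// stream. A real JoinProcessor would buffer and match events from its
// two input streams instead of just counting them.
public class JoinProcessor implements Processor {

    private int id;
    private long eventCount;

    @Override
    public void onCreate(int id) {
        // Called when the Processor is deployed on the execution engine.
        this.id = id;
        this.eventCount = 0;
    }

    @Override
    public boolean process(ContentEvent event) {
        // Called once per incoming message, regardless of source stream.
        eventCount++;
        return true;
    }

    @Override
    public Processor newProcessor(Processor p) {
        // Used by the engine to create copies for parallel instances.
        return new JoinProcessor();
    }
}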
1. Download SAMOA
git clone git@github.com:yahoo/samoa.git
cd samoa
mvn -Pstorm package
2. Download the Forest CoverType dataset.
wget "http://downloads.sourceforge.net/project/moa-datastream/Datasets/Classification/covtypeNorm.arff.zip" unzip covtypeNorm.arff.zip
Forest CoverType contains the forest cover type for 30 x 30 meter cells obtained from US Forest Service (USFS) Region 2 Resource Information System (RIS) data. It contains 581,012 instances and 54 attributes, and it has been used in several papers on data stream classification.
3. Download a simple logging library.
wget "http://repo1.maven.org/maven2/org/slf4j/slf4j-simple/1.7.2/slf4j-simple-1.7.2.jar"
4. Run an example: classify the CoverType dataset with the VerticalHoeffdingTree in local mode.
java -cp slf4j-simple-1.7.2.jar:target/SAMOA-Storm-0.0.1.jar com.yahoo.labs.samoa.DoTask "PrequentialEvaluation -l classifiers.trees.VerticalHoeffdingTree -s (ArffFileStream -f covtypeNorm.arff) -f 100000"
The output will be a sequence of the evaluation metrics for accuracy, taken every 100,000 instances: in the command above, -l selects the learner, -s the input stream (here, an ARFF file reader), and -f the frequency, in instances, at which the metrics are reported.
To run the example on Storm, please refer to the instructions on the wiki.
For more information about SAMOA, see the README and the wiki on GitHub, or post a question on the mailing list.
SAMOA is licensed under the Apache Software License v2.0. You are welcome to contribute to the project! SAMOA accepts contributions under an Apache-style contributor license agreement.
Good luck! We hope you find SAMOA useful. We will continue developing the framework by adding new algorithms and platforms.
Gianmarco De Francisci Morales (gdfm@yahoo-inc.com) and Albert Bifet (abifet@yahoo.com) @ Yahoo Labs Barcelona
A Japanese programmer has unveiled proof-of-concept effects for an augmented reality game made with ARKit, including visual filters and Predator-like optical camouflage:
ミッション1【野良アンドロイド(光学迷彩搭載機)の発見・確保】 #ARKit pic.twitter.com/7m0esEGrUt
— kidachi (@kidach1) August 19, 2017
[Translation:] Mission 1: Find and secure the stray android (a unit equipped with optical camouflage) #ARKit
You can follow Kidachi on Twitter here
Back in the day, movies started with a cartoon. Learn the secrets of the Red Planet in these animated 60-second chunks.
Watch two galaxies collide billions of years from now in this high-definition visualization.
Wait for the dark of the waning Moon next weekend to take in this 4K tour of our constant celestial companion.
Watch graceful dances in the Sun’s atmosphere in this series of videos created by our 24/7 Sun-sentinel, the Solar Dynamics Observatory (SDO).
Crank up the volume and learn about NASA science for this short video about some of our science missions, featuring a track by Fall Out Boy.
Follow an asteroid from its humble origins to its upcoming encounter with our spacecraft in this stunning visualization.
Join Apollo mission pilots as they fly—and even crash—during daring practice runs for landing on the Moon.
Join the crew of Apollo 8 as they become the first human beings to see the Earth rise over the surface of the Moon.
Watch a musical, whimsical recreation of the 2005 Huygens probe descent to Titan, Saturn’s giant moon.
Our Goddard Scientific Visualization Studio provides a steady stream of fresh videos for your summer viewing pleasure. Come back often and enjoy.
Read the full version of this article on the web HERE.
Make sure to follow us on Tumblr for your regular dose of space: http://nasa.tumblr.com.
Noodle tearing up the dance floor
Tele-Present Water by David Bowen
I rarely use the phrase ‘mind blown’, but this is one of those rare occurrences.
An art installation which combines real-time data, mechanical puppetry, and a physical version of the grid representation usually rendered virtually on computers:
This installation draws information from the intensity and movement of the water in a remote location. Wave data is being collected in real time from National Oceanic and Atmospheric Administration data buoy station 46246, 49.985 N 145.089 W (49°59'7" N 145°5'20" W), on the Pacific Ocean. The wave intensity and frequency are scaled and transferred to the mechanical grid structure, resulting in a simulation of the physical effects caused by the movement of water from halfway around the world.
A link to the artist’s website for this work can be found here
For Patreon backers, I have put together a brief look at some projects that creatively explore the potential of augmented reality, which has been brought into the mainstream via Apple’s ARKit technology.
More Here
#FridayFunFact: VR & AR are fast becoming the latest digital trend (and the next marketing platform target). This is an interesting projection of what the market could look like for VR/AR applications.