laossj - Untitled

295 posts

Latest Posts by laossj

laossj
3 years ago
Dildo Generator

Online 3D experiment by Ikaros Kappler, described as an “Extrusion/Revolution Generator”.

Created with three.js, it lets you alter the Bézier curves and angle of the form, and is designed with 3D printing in mind (models can be exported and saved, and their weight in silicone calculated).

Try it out for yourself (if you wish) here

laossj
7 years ago
Alternative Neural Edit

Latest project from @mario-klingemann employs neural networks trained on a collection of archive footage to recreate videos using the dataset.

It is confirmed that no human intervention has occurred in the processed output, and it is interesting to see where there are convincing connections between the two (and where there apparently are none):

Destiny Pictures, Alternative Neural Edit, Side by Side Version 

This movie has been automatically collaged by a neural algorithm using the movie that Donald Trump gave as a present to Kim Jong Un as the template, replacing all scenes with visually similar scenes from public domain movies found in the Internet Archive.

Neural Remake of “Take On Me” by A-Ha

An AI automatically detects the scenes in the source video clip and then replaces them with similar looking archival footage. The process is fully automatic, there are no manual edits.

Neural Reinterpretation of “Sabotage” by the Beastie Boys

An AI automatically detects the scenes in the source video clip and then replaces them with similar looking archival footage.

There are other video examples at Mario’s YouTube page (but some may not be viewable due to music copyright).

If you follow Mario’s Twitter timeline, you can keep up with the latest examples and follow the evolution of the project [link]

laossj
7 years ago

Tele-Present Water by David Bowen

I rarely use the phrase ‘mind blown’, but this is one of those rare occurrences.

An art installation which combines real-time data, mechanical puppetry, and a physical version of the grid representation usually employed virtually with computers:

This installation draws information from the intensity and movement of the water in a remote location. Wave data is being collected in real-time from National Oceanic and Atmospheric Administration data buoy station 46246, 49.985 N 145.089 W (49°59'7" N 145°5'20" W) on the Pacific Ocean. The wave intensity and frequency is scaled and transferred to the mechanical grid structure resulting in a simulation of the physical effects caused by the movement of water from halfway around the world.
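
Bowen hasn’t published the control code, but the scaling step the description mentions (buoy reading → actuator position) is essentially a range remap; a minimal sketch, with the ranges invented purely for illustration:

```javascript
// Remap a sensor value from its input range onto an actuator range.
// The specific ranges below are hypothetical, not from the installation.
function mapRange(value, inMin, inMax, outMin, outMax) {
  const t = (value - inMin) / (inMax - inMin);
  // clamp so an out-of-range buoy reading can't overdrive the motors
  const clamped = Math.min(1, Math.max(0, t));
  return outMin + clamped * (outMax - outMin);
}

// e.g. significant wave height 0–8 m → actuator travel 0–120 mm
console.log(mapRange(2, 0, 8, 0, 120)); // 30
```

Each of the grid’s actuators would apply a mapping like this to the incoming NOAA wave data every update cycle.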

A link to the artist’s website for this work can be found here

laossj
7 years ago
Tele-present Wind

Installation by David Bowen reproduces real-time wind data with a collection of mechanized stalks:

This installation consists of a series of 126 x/y tilting mechanical devices connected to thin dried plant stalks installed in a gallery and a dried plant stalk connected to an accelerometer installed outdoors. When the wind blows it causes the stalk outside to sway. The accelerometer detects this movement transmitting the motion to the grouping of devices in the gallery. Therefore the stalks in the gallery space move in real-time and in unison based on the movement of the wind outside.

May-September 2018 a newly expanded version of tele-present wind was installed at Azkuna Zentroa, Bilbao and the sensor was installed in an outdoor location adjacent to the Visualization and Digital Imaging Lab at the University of Minnesota. Thus the individual components of the installation in Spain moved in unison as they mimicked the direction and intensity of the wind halfway around the world. As it monitored and collected real-time data from this remote and distant location, the system relayed a physical representation of the dynamic and fluid environmental conditions.

More Here

Related: Another project by David from 2012 did something similar with ‘Tele-Present Water’ [Link]

laossj
7 years ago

9 Ocean Facts You Likely Don’t Know, but Should

Earth is a place dominated by water, mainly oceans. It’s also a place our researchers study to understand life. Trillions of gallons of water flow freely across the surface of our blue-green planet. The ocean’s vibrant ecosystems impact our lives in many ways.

In celebration of World Oceans Day, here are a few things you might not know about these complex waterways.

1. Why is the ocean blue? 

image

The way light is absorbed and scattered throughout the ocean determines which colors it takes on. Red, orange, yellow, and green light are absorbed quickly beneath the surface, leaving blue light to be scattered and reflected back. This causes us to see various blue and violet hues.

2. Want a good fishing spot? 

image

Follow the phytoplankton! These small plant-like organisms are the beginning of the food web for most of the ocean. As phytoplankton grow and multiply, they are eaten by zooplankton, small fish and other animals. Larger animals then eat the smaller ones. The fishing industry identifies good spots by using ocean color images to locate areas rich in phytoplankton. Phytoplankton, as revealed by ocean color, frequently show scientists where ocean currents provide nutrients for plant growth.

3. The ocean is many colors. 

image

When we look at the ocean from space, we see many different shades of blue. Using instruments that are more sensitive than the human eye, we can carefully measure the fantastic array of colors of the ocean. Different colors may reveal the presence and amount of phytoplankton, sediments and dissolved organic matter.

4. The ocean can be a dark place. 

About 70 percent of the planet is ocean, with an average depth of more than 12,400 feet. Given that light doesn’t penetrate much deeper than 330 feet below the water’s surface (in the clearest water), most of our planet is in a perpetual state of darkness. Although dark, this part of the ocean still supports many forms of life, some of which are fed by sinking phytoplankton. 

5. We study all aspects of ocean life. 

image

Instruments on satellites in space, hundreds of kilometers above us, can measure many things about the sea: surface winds, sea surface temperature, water color, wave height, and height of the ocean surface.

6. In a gallon of average sea water, there is about ½ cup of salt. 

image

The amount of salt varies depending on location. The Atlantic Ocean is saltier than the Pacific Ocean, for instance. Most of the salt in the ocean is the same kind of salt we put on our food: sodium chloride.
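
That “½ cup per gallon” figure holds up as a back-of-envelope calculation, assuming an average salinity of about 3.5% by mass (the density and grams-per-cup constants below are my approximations, not from the post):

```javascript
// Sanity-check of "about ½ cup of salt per gallon of seawater".
const GALLON_LITERS = 3.785;
const SEAWATER_DENSITY_KG_PER_L = 1.025; // approximate
const SALINITY = 0.035;                  // ~3.5% salt by mass
const SALT_GRAMS_PER_CUP = 288;          // rough figure for granulated salt

const seawaterGrams = GALLON_LITERS * SEAWATER_DENSITY_KG_PER_L * 1000;
const saltGrams = seawaterGrams * SALINITY;   // ≈ 136 g
const cups = saltGrams / SALT_GRAMS_PER_CUP;  // ≈ 0.47 cups

console.log(cups.toFixed(2)); // close to half a cup
```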

7. A single drop of sea water is teeming with life.  

image

It will most likely have millions (yes, millions!) of bacteria and viruses, thousands of phytoplankton cells, and even some fish eggs, baby crabs, and small worms. 

8. Where does Earth store freshwater? 

image

Just 3.5 percent of Earth’s water is fresh—that is, with few salts in it. You can find Earth’s freshwater in our lakes, rivers, and streams, but don’t forget groundwater and glaciers. Over 68 percent of Earth’s freshwater is locked up in ice and glaciers. And another 30 percent is in groundwater. 

9. Phytoplankton are the “lungs of the ocean”.

image

Just like forests are considered the “lungs of the earth”, phytoplankton are known for providing the same service in the ocean! They consume carbon dioxide dissolved in the sunlit portion of the ocean and produce about half of the world’s oxygen.

Want to learn more about how we study the ocean? Follow @NASAEarth on Twitter.

Make sure to follow us on Tumblr for your regular dose of space: http://nasa.tumblr.com.  

laossj
7 years ago
YouTube Artifacts

Latest AR exhibition from MoMAR (who ran a guerrilla show earlier this year) returns to the Pollock Room at MoMA New York, featuring works by David Kraftsow, who is responsible for the YouTube Artifacts bot that regularly generates animated images from distorted videos:

Welcome to The Age of the Algorithm. A world in which automated processes are no longer simply tools at our disposal, but the single greatest omnipresent force currently shaping our world. For the most part, they remain unseen. Going about their business, mimicking human behavior and making decisions based on statistical analysis of what they ‘think’ is right. If the role of art in society is to incite reflection and ask questions about the state of our world, can algorithms be a part of determining and defining people’s artistic and cultural values? MoMAR presents a series of eight pieces created by David Kraftsow’s YouTube Artifact Bot.

More Here

laossj
7 years ago

Solar System: 10 Things to Know

Movie Night

Summer break is just around the corner. Hang a sheet from the clothesline in the backyard and fire up the projector for a NASA movie night.

1. Mars in a Minute

image

Back in the day, movies started with a cartoon. Learn the secrets of the Red Planet in these animated 60 second chunks.

2. Crash of the Titans

image

Watch two galaxies collide billions of years from now in this high-definition visualization.

3. Tour the Moon in 4K

image

Wait for the dark of the waning Moon next weekend to take in this 4K tour of our constant celestial companion.

4. Seven Years of the Sun

image

Watch graceful dances in the Sun’s atmosphere in this series of videos created by our 24/7 Sun-sentinel, the Solar Dynamic Observatory (SDO).

5. Light ‘Em Up

image

Crank up the volume and learn about NASA science for this short video about some of our science missions, featuring a track by Fall Out Boy.

6. Bennu’s Journey

image

Follow an asteroid from its humble origins to its upcoming encounter with our spacecraft in this stunning visualization.

7. Lunar Landing Practice

Join Apollo mission pilots as they fly—and even crash—during daring practice runs for landing on the Moon.

8. Earthrise

image

Join the crew of Apollo 8 as they become the first human beings to see the Earth rise over the surface of the Moon.

9. Musical Descent to Titan

image

Watch a musical, whimsical recreation of the 2005 Huygens probe descent to Titan, Saturn’s giant moon.

10. More Movies

image

Our Goddard Scientific Visualization Studio provides a steady stream of fresh videos for your summer viewing pleasure. Come back often and enjoy.

Read the full version of this article on the web HERE. 


laossj
7 years ago
Deep Video Portraits

Graphics research from Stanford University and collaborators is the latest development in facial-expression-transfer visual puppetry, offering photorealistic and editable results:

We present a novel approach that enables photo-realistic re-animation of portrait videos using only an input video. In contrast to existing approaches that are restricted to manipulations of facial expressions only, we are the first to transfer the full 3D head position, head rotation, face expression, eye gaze, and eye blinking from a source actor to a portrait video of a target actor. The core of our approach is a generative neural network with a novel space-time architecture. The network takes as input synthetic renderings of a parametric face model, based on which it predicts photo-realistic video frames for a given target actor. The realism in this rendering-to-video transfer is achieved by careful adversarial training, and as a result, we can create modified target videos that mimic the behavior of the synthetically-created input. In order to enable source-to-target video re-animation, we render a synthetic target video with the reconstructed head animation parameters from a source video, and feed it into the trained network – thus taking full control of the target. With the ability to freely recombine source and target parameters, we are able to demonstrate a large variety of video rewrite applications without explicitly modeling hair, body or background. For instance, we can reenact the full head using interactive user-controlled editing, and realize high-fidelity visual dubbing. To demonstrate the high quality of our output, we conduct an extensive series of experiments and evaluations, where for instance a user study shows that our video edits are hard to detect. 

More Here

laossj
7 years ago
Shape Representation by Zippables

Computational fabrication research from the Interactive Geometry Lab can turn 3D model files into textile objects whose parts connect and take shape using zip fasteners:

Fabrication from developable parts is the basis for arts such as papercraft and needlework, as well as modern architecture and CAD in general, and it has inspired much research. We observe that the assembly of complex 3D shapes created by existing methods often requires first fabricating many small parts and then carefully following instructions to assemble them together. Despite its significance, this error prone and tedious process is generally neglected in the discussion. We present the concept of zippables – single, two dimensional, branching, ribbon-like pieces of fabric that can be quickly zipped up without any instructions to form 3D objects. Our inspiration comes from the so-called zipit bags (just-zipit.com), which are made of a single, long ribbon with a zipper around its boundary. In order to assemble the bag, one simply needs to zip up the ribbon. Our method operates in the same fashion, but it can be used to approximate a wide variety of shapes. Given a 3D model, our algorithm produces plans for a single 2D shape that can be laser cut in few parts from fabric or paper. A zipper can then be attached along the boundary by sewing, or by gluing using a custom-built fastening rig. We show physical and virtual results that demonstrate the capabilities of our method and the ease with which shapes can be assembled. 

More Here

laossj
7 years ago

New stability control for motorcycles by Bosch.

It consists of a charge of compressed air that is released to “press” the front wheel against the ground when a loss of grip is detected at the front end.

image
laossj
7 years ago

math & music

When I was a freshman, studying music, I built my first computer program… and I didn’t even know I was coding:

image

At the time, I was learning to analyze chords by identifying the individual notes, reordering them into “thirds”, and comparing this stack to the actual arrangement to determine the inversion. I didn’t know anything about programming at the time, but my roommate was an engineer who showed me Wolfram’s Mathematica, a coding environment useful to a number of fields.

image

Well, I was just as “screw the rules” then, so I learned just enough to build a sort of decision tree to do my chord analysis homework for me. Above, nested If[] statements determine the interval by calculating the distance between pitches (in half-steps). Below, a similar set-up figures out the inversion of a chord.

image

There are a bunch of similarities to the JavaScript world I generally live in these days. It looks like Mathematica uses [] brackets instead of () parentheses and {} curly brackets, and presents its arguments more like an Excel function, but all the math-y bits certainly work the same… except… I wish JavaScript let you string inequalities together like that!
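
For illustration, those nested interval tests might translate to JavaScript roughly like this (the function names and lookup table are my reconstruction, not the original Mathematica code):

```javascript
// Hypothetical JS reconstruction of the homework helper's interval
// logic: classify an interval by its distance in half-steps between
// two MIDI pitch numbers.
const INTERVAL_NAMES = [
  "unison", "minor 2nd", "major 2nd", "minor 3rd", "major 3rd",
  "perfect 4th", "tritone", "perfect 5th", "minor 6th", "major 6th",
  "minor 7th", "major 7th",
];

function intervalName(lowPitch, highPitch) {
  // reduce to within one octave; Mathematica would let you write
  // 0 <= h < 12 directly, but in JS that's two comparisons with &&
  const h = ((highPitch - lowPitch) % 12 + 12) % 12;
  return INTERVAL_NAMES[h];
}

console.log(intervalName(60, 64)); // C4 to E4 -> "major 3rd"
console.log(intervalName(60, 67)); // C4 to G4 -> "perfect 5th"
```

A table lookup replaces the long If[] chain, but the logic is the same decision tree.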

image

One interesting peculiarity here - I have multiple functions with the same name. Whereas JavaScript functions don’t much care how many inputs you actually feed them, it seems I have different versions of the same keychordtype[] function for different numbers of inputs (defined here with a trailing _ underscore).

image

And instead of the console.log() message or the alert() pop-ups, outputs are made visible with the MessageDialogue[] function. So even though I don’t have any comments, and my nesting, naming, and order are a bit sloppy (look at those closing brackets! ridiculous!), I can still understand what’s going on - 10 years and several languages later.

tl;dr: music theory is math; different languages have different syntax, but logic is logic; Mathematica has a 2-week trial I’m eating through to take these screenshots

project: chord analysis homework helper

laossj
7 years ago
Untitled
laossj
7 years ago
Augmented reality game with unique semi-transmissive rendering method

Update to a project from kidach1: a game featuring enemies with optical camouflage which you can uncover with filters (multiplayer play is also possible):

You can keep track of progress at Twitter or Patreon

laossj
7 years ago
Untitled
laossj
7 years ago
Hype Cycle: Machine Learning

Project from Universal Everything is a series of films exploring human-machine collaboration, here presenting performative dance with human and abstracted forms:

Hype Cycle ­is a series of futurist films exploring human-machine collaboration through performance and emerging technologies.

Machine Learning is the second set of films in the Hype Cycle series. It builds on the studio’s past experiments with motion studies, and asks: when will machines achieve human agility?

Set in a spacious, well-worn dance studio, a dancer teaches a series of robots how to move. As the robots’ abilities develop from shaky mimicry to composed mastery, a physical dialogue emerges between man and machine – mimicking, balancing, challenging, competing, outmanoeuvring.

Can the robot keep up with the dancer? At what point does the robot outperform the dancer? Would a robot ever perform just for pleasure? Does giving a machine a name give it a soul?

These human-machine interactions from Universal Everything are inspired by the Hype Cycle trend graphs produced by Gartner Research, a valiant attempt to predict future expectations and disillusionments as new technologies come to market.

More Here

laossj
7 years ago

THIS IS THE VIRTUAL REALITY I WAS PROMISED

Erika Ishii presents a wireless solution for virtual reality experiences: a high-powered laptop strapped to her back with an HTC Vive Pro (though it isn’t clear how long the batteries will last):

THIS IS THE VIRTUAL REALITY I WAS PROMISED. @TeaganMorrison built us a wireless VR rig! @Alienware 15 laptop, @htcvive pro, army frame backpack. 

Source

laossj
7 years ago
FontCode

Research from the Columbia Computer Graphics Group can embed hidden information in text by minutely altering font characteristics using neural networks:

We introduce FontCode, an information embedding technique for text documents. Provided a text document with specific fonts, our method embeds user-specified information in the text by perturbing the glyphs of text characters while preserving the text content. We devise an algorithm to choose unobtrusive yet machine-recognizable glyph perturbations, leveraging a recently developed generative model that alters the glyphs of each character continuously on a font manifold. We then introduce an algorithm that embeds a user-provided message in the text document and produces an encoded document whose appearance is minimally perturbed from the original document. We also present a glyph recognition method that recovers the embedded information from an encoded document stored as a vector graphic or pixel image, or even on a printed paper. In addition, we introduce a new error-correction coding scheme that rectifies a certain number of recognition errors. Lastly, we demonstrate that our technique enables a wide array of applications, using it as a text document metadata holder, an unobtrusive optical barcode, a cryptographic message embedding scheme, and a text document signature.
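
To make the embedding idea concrete, here is a toy sketch (emphatically not the paper’s method, which perturbs glyphs continuously on a font manifold): hiding one bit per character by choosing between two visually similar glyph variants:

```javascript
// Toy illustration of glyph-based steganography: each character
// carries one hidden bit via its "variant" index, standing in for
// a subtly perturbed glyph shape.
function encode(text, bits) {
  return text.split("").map((ch, i) => ({
    char: ch,
    variant: i < bits.length ? bits[i] : 0,
  }));
}

function decode(glyphs, nBits) {
  // a real decoder would recognize the perturbation from a vector
  // graphic, pixel image, or scan; here we just read the index back
  return glyphs.slice(0, nBits).map((g) => g.variant);
}

const message = [1, 0, 1, 1];
const doc = encode("FontCode", message);
console.log(decode(doc, 4)); // recovers [ 1, 0, 1, 1 ]
```

The paper’s contribution is making the “variant” choice both visually unobtrusive and machine-recognizable, plus error-correction coding on top.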

More Here

laossj
7 years ago
Scrying Pen

Webtoy by Andy Matuschak uses the neural-network-trained SketchRNN model to visualize, in real time, potential sketch marks whilst you are drawing particular objects:

This pen’s ink stretches backwards into the past and forwards into possible futures. The two sides make a strange loop: the future ink influences how you draw, which in turn becomes the new “past” ink influencing further future ink.

Put another way: this is a realtime implementation of SketchRNN which predicts future strokes while you draw.

Currently works best in Chrome, you can try it out for yourself here

laossj
7 years ago
The Parallax View

Project from Peder Norrby is an iPhone X visual toy using TrueDepth face tracking to produce a trompe-l'œil effect of depth based on the position of your head:

Explainer video - enable sound! The app, called #TheParallaxView, is in review on @AppStore #iPhoneX #ARKit #FaceTracking #madewithunity pic.twitter.com/6P8ofGZqP4

— ΛLGΘMΨSΓIC (@algomystic)

February 28, 2018

Yes it’s ARKit face tracking and #madewithunity … basically non-symmetric camera frustum / off-axis projection.
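
The “non-symmetric camera frustum / off-axis projection” Peder mentions can be sketched as a standard OpenGL-style asymmetric projection matrix, rebuilt each frame from the tracked eye position (an illustration of the technique, not the app’s actual code):

```javascript
// Build a column-major 4x4 off-axis projection matrix (glFrustum-style).
// The near-plane rectangle (left/right/bottom/top) is positioned
// relative to the tracked eye, so it is generally NOT symmetric.
function offAxisProjection(left, right, bottom, top, near, far) {
  return [
    (2 * near) / (right - left), 0, 0, 0,
    0, (2 * near) / (top - bottom), 0, 0,
    (right + left) / (right - left),
    (top + bottom) / (top - bottom),
    -(far + near) / (far - near), -1,
    0, 0, (-2 * far * near) / (far - near), 0,
  ];
}

// With the eye centered, the frustum is symmetric...
const centered = offAxisProjection(-1, 1, -1, 1, 1, 100);
// ...but as the tracked head moves right, the frustum skews the other
// way, producing the trompe-l'œil depth effect.
const headMovedRight = offAxisProjection(-1.4, 0.6, -1, 1, 1, 100);
```

Updating these six bounds from the face-tracking data every frame is essentially all “holographic” head-coupled perspective requires.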

The app is currently in review, but Peder plans to release the code on GitHub in the future for developers to experiment with.

You can follow progress at Peder’s Twitter account here

laossj
7 years ago
NSynth Super

Project from Google Creative Lab is an open-source physical interface for their NSynth project, which generates new sounds by using machine learning to understand the characteristics of existing ones:

Building upon past research in this field, Magenta created NSynth (Neural Synthesizer). It’s a machine learning algorithm that uses a deep neural network to learn the characteristics of sounds, and then create a completely new sound based on these characteristics.

Rather than combining or blending the sounds, NSynth synthesizes an entirely new sound using the acoustic qualities of the original sounds—so you could get a sound that’s part flute and part sitar all at once.

Since the release of NSynth, Magenta have continued to experiment with different musical interfaces and tools to make the output of the NSynth algorithm more easily accessible and playable.

Using NSynth Super, musicians have the ability to explore more than 100,000 sounds generated with the NSynth algorithm.
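
To make the “synthesizes rather than blends” distinction concrete, here is a toy sketch (not Magenta’s API): the blending happens in a learned embedding space that a neural decoder would then turn into audio, rather than in the audio samples themselves:

```javascript
// Naive audio crossfade: both source sounds remain audible, layered.
function mixAudio(a, b, t) {
  return a.map((s, i) => (1 - t) * s + t * b[i]);
}

// NSynth-style alternative: interpolate in embedding space instead.
// Decoding the blended vector (decode() here is a stand-in for the
// neural decoder, not a real function) yields a single new timbre
// rather than two overlaid sounds.
function interpolateLatent(za, zb, t) {
  return za.map((v, i) => (1 - t) * v + t * zb[i]);
}

// Hypothetical embedding vectors for illustration:
const fluteZ = [0.2, 0.9, -0.3];
const sitarZ = [0.8, -0.1, 0.5];
console.log(interpolateLatent(fluteZ, sitarZ, 0.5)); // ≈ [0.5, 0.4, 0.1]
```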

More Here

laossj
7 years ago
Street Fighter II in the Real World

Proof-of-concept experience by Abhishek Singh makes the classic game Street Fighter II playable in the real world using iOS ARKit and Unity; it can be played head-to-head via two iPhones:

Remember the classic arcade game Street Fighter 2? I rebuilt it as a multiplayer AR game to actually take it into the streets. I’m calling it the Real World Warrior edition.

Link

laossj
7 years ago

What's Inside SOFIA? High Flying Instruments

image

Our flying observatory, called SOFIA, carries a 100-inch telescope inside a Boeing 747SP aircraft. Having an airborne observatory provides many benefits.

image

It flies at 38,000-45,000 feet – above 99% of the water vapor in Earth’s atmosphere that blocks infrared light from reaching the ground! 

image

It is also mobile! We can fly to the best vantage point for viewing the cosmos. We go to Christchurch, New Zealand, nearly every year to study objects best observed from the Southern Hemisphere. And last year we went to Daytona Beach, FL, to study the atmosphere of Neptune’s moon Triton while flying over the Atlantic Ocean.

image

SOFIA’s telescope has a large primary mirror – about the same size as the Hubble Space Telescope’s mirror. Large telescopes let us gather a lot of light to make high-resolution images!

image

But unlike a space-based observatory, SOFIA returns to our base every morning.

image

Which means that we can change the instruments we use to analyze the light from the telescope to make many different types of scientific observations. We currently have seven instruments, and new ones are now being developed to incorporate new technologies.

So what is inside SOFIA? The existing instruments include:

image

Infrared cameras that can peer inside celestial clouds of dust and gas to see stars forming inside. They can also study molecules in a nebula that may offer clues to the building blocks of life…

image

…A polarimeter, a device that measures the alignment of incoming light waves, that we use to study magnetic fields. The left image reveals that hot dust in the starburst galaxy M82 is magnetically aligned with the gas flowing out of it, shown in blue on the right image from our Chandra X-ray Observatory. This can help us understand how magnetic fields affect how stars form.

image

…A tracking camera that we used to study New Horizons’ post-Pluto flyby target and found that it may have its own moon…

image

…A spectrograph that spreads light into its component colors. We’re using one to search for signs of water plumes on Jupiter’s icy moon Europa and to search for signs of water on Venus to learn about how it lost its oceans…

image

…An instrument that studies high energy terahertz radiation with 14 detectors. It’s so efficient that we made this map of Orion’s Horsehead Nebula in only four hours! The map is made of 100 separate views of the nebula, each mapping carbon atoms at different velocities.

image

…And we have an instrument under construction that will soon let us study how water vapor, ice and oxygen combine at different times during planet formation, to better understand how these elements combine with dust to form a mass that can become a planet.

image

Our airborne telescope has already revealed so much about the universe around us! Now we’re looking for the next idea to help us use SOFIA in even more new ways. 

Discover more about our SOFIA flying observatory HERE. 


laossj
7 years ago
Automatic Machine Knitting of 3D Meshes

Researchers from the Carnegie Mellon Textiles Lab have put forward a framework to turn a 3D model file into a physical knitted object:

We present the first computational approach that can transform 3D meshes, created by traditional modeling programs, directly into instructions for a computer-controlled knitting machine. Knitting machines are able to robustly and repeatably form knitted 3D surfaces from yarn, but have many constraints on what they can fabricate. Given user-defined starting and ending points on an input mesh, our system incrementally builds a helix-free, quad-dominant mesh with uniform edge lengths, runs a tracing procedure over this mesh to generate a knitting path, and schedules the knitting instructions for this path in a way that is compatible with machine constraints. We demonstrate our approach on a wide range of 3D meshes.

More Here

laossj
7 years ago
HOVER BONES

Plus check out Glitch Black’s music on Bandcamp!

laossj
7 years ago
SP. 114 - Ghost in the Shell (2017)

Repairing the robotic hand.

laossj
7 years ago
SP. 103 - Ghost in the Shell: The New Movie (2015)

laossj
7 years ago
This skeleton robot salamander just wiggled its way into my heart.

laossj
7 years ago

Solar System: 10 Things to Know This Week

Pioneer Days

Someone’s got to be first. In space, the first explorers beyond Mars were Pioneers 10 and 11, twin robots who charted the course to the cosmos.

image

1-Before Voyager

image

Voyager, with its outer solar system tour and interstellar observations, is often credited as the greatest robotic space mission. But today we remember the plucky Pioneers, the spacecraft that proved Voyager’s epic mission was possible.

2-Where No One Had Gone Before

image

Forty-five years ago this week, scientists still weren’t sure how hard it would be to navigate the main asteroid belt, a massive field of rocky debris between Mars and Jupiter. Pioneer 10 helped them work that out, emerging from the first six-month crossing in February 1973. Pioneer 10 logged a few meteoroid hits (fewer than expected) and taught engineers new tricks for navigating farther and farther beyond Earth.

3-Trailblazer No. 2

image

Pioneer 11 was a backup spacecraft launched in 1973 after Pioneer 10 cleared the asteroid belt. The new mission provided a second close look at Jupiter, the first close-up views of Saturn and also gave Voyager engineers plotting an epic multi-planet tour of the outer planets a chance to practice the art of interplanetary navigation.

4-First to Jupiter

image

Three hundred and sixty-three years after humankind first looked at Jupiter through a telescope, Pioneer 10 became the first human-made visitor to the Jovian system in December 1973. The spacecraft snapped about 300 photos during a flyby that brought it within 81,000 miles (about 130,000 kilometers) of the giant planet’s cloud tops.

5-Pioneer Family

image

Pioneer began as a Moon program in the 1950s and evolved into increasingly more complicated spacecraft, including a Pioneer Venus mission that delivered a series of probes to explore deep into the mysterious toxic clouds of Venus. A family portrait (above) shows (from left to right) Pioneers 6–9, 10 and 11 and the Pioneer Venus Orbiter and Multiprobe series. Image date: March 11, 1982.

6-A Pioneer and a Pioneer

image

Classic rock has Van Halen, we have Van Allen. With credits from Explorer 1 to Pioneer 11, James Van Allen was a rock star in the emerging world of planetary exploration. Van Allen (1914-2006) is credited with the first scientific discovery in outer space and was a fixture in the Pioneer program. Van Allen was a key part of the team from the early attempts to explore the Moon (he’s pictured here with Pioneer 4) to the more evolved science platforms aboard Pioneers 10 and 11.

7-The Farthest…For a While

image

For more than 25 years, Pioneer 10 was the most distant human-made object, breaking records by crossing the asteroid belt, the orbit of Jupiter and eventually even the orbit of Pluto. Voyager 1, moving even faster, claimed the most distant title in February 1998 and still holds that crown.

8-Last Contact

image

We last heard from Pioneer 10 on Jan. 23, 2003. Engineers felt its power source was depleted and no further contact should be expected. We tried again in 2006, but had no luck. The last transmission from Pioneer 11 was received in September 1995. Both missions were planned to last about two years.

9-Galactic Ghost Ships

image

Pioneers 10 and 11 are two of five spacecraft with sufficient velocity to escape our solar system and travel into interstellar space. The other three—Voyagers 1 and 2 and New Horizons—are still actively talking to Earth. The twin Pioneers are now silent. Pioneer 10 is heading generally for the red star Aldebaran, which forms the eye of Taurus (The Bull). It will take Pioneer over 2 million years to reach it. Pioneer 11 is headed toward the constellation of Aquila (The Eagle) and will pass nearby in about 4 million years.

10-The Original Message to the Cosmos

image

Years before Voyager’s famed Golden Record, Pioneers 10 and 11 carried the original message from Earth to the cosmos. Like Voyager’s record, the Pioneer plaque was the brainchild of Carl Sagan, who wanted any alien civilization that might encounter the craft to know who made it and how to contact them. The plaques give our location in the galaxy and depict a man and woman drawn in relation to the spacecraft.

Read the full version of this week’s 10 Things article HERE. 


laossj
7 years ago
Jelly Mario

Web demo by Stefan Hedman adds elastic physics to a classic Super Mario level, currently in pre-alpha. You need a keyboard (play with the arrow keys); to start, move towards the right side of the screen:
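
Hedman’s engine internals aren’t shown on the page, but “jelly” deformation is typically built from mass-spring systems; a minimal one-spring sketch under that assumption:

```javascript
// One damped spring settling back to its rest length (Hooke's law
// with simple explicit integration). A jelly body is many of these
// springs connecting the vertices of a shape.
function simulateSpring(length, restLength, steps) {
  const k = 10;        // spring stiffness
  const damping = 0.8; // velocity retained per step
  const dt = 0.05;     // timestep
  let velocity = 0;
  for (let i = 0; i < steps; i++) {
    const force = -k * (length - restLength); // Hooke's law
    velocity = (velocity + force * dt) * damping;
    length += velocity * dt;
  }
  return length;
}

// stretched to 2.0 with rest length 1.0: it wobbles, then settles
console.log(simulateSpring(2.0, 1.0, 200).toFixed(2)); // ≈ 1.00
```

The wobble you see when Mario lands comes from exactly this overshoot-and-settle behavior, networked across the whole sprite.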

[video from Robert McGregor]

Play around with it yourself here
