laossj - Untitled

295 posts

Latest Posts by laossj - Page 2

7 years ago
Doodle Studio 95

Project from Fernando Ramallo is a drawing and animation tool for Unity with a simple interface for creating assets for games and interactive experiences, a bit like Flash but in 2.5D:

DOODLE STUDIO 95 is a FUN drawing and animation tool for Unity.

Doodle an animation without leaving the Editor and turn your drawings into sprites, UI elements, particles or textures, with a single click.

Features

Draw inside the Unity Editor

Easy presets for backgrounds, characters and UI elements

Example scenes with 2.5D characters, foliage, speech bubbles and transitions, with reusable scripts

Draw and animate inside the Scene View (beta)

Shadow-casting shaders

Don’t think about materials or image formats, it Just Works.

Five Symmetry modes

Record mode adds frames as you draw

Record a sound with a single click! Boop!

Easy API for using animations with scripts

Convert to sprite sheets or GIFs

…and more

You can find out more here, and even try out a browser-based interactive tour here

7 years ago

Human 🧔🏽 vs. robot 🤖 | Our audience: #djiphantom4 #djiglobal #uav #3drobotics #djiinspire1 #quadcopter #miniquad #djiphantom3 #robotics #robot #aerialphotography #fpv #drones #hexacopter #octocopter #djiphantom #arduino #hobbyking #drone #multirotor #dronephotography #rcplane #spacex #sparkfun #adafruit #nasa #raspberrypi #mavicpro #skynet #blackmirror | Video by Boston Dynamics (at MIT School of Engineering)

7 years ago
Strange backyard sale

VR artist Anna Zhilyaeva shares her first creation made with Tilt Brush in 2018:

This is my first Tilt Brush painting of 2018. I tried to make her look good from every angle while keeping the painting style.

You can view Anna’s work on Google Poly here

Link

7 years ago
Way to Artist

Installation by teamVOID uses industrial robots to perform life drawings alongside human artists:

‘Way to Artist’ has the purpose of rethinking the process of artistic creation through a comparison of robot and human actions. Drawing is assumed to be a creative activity that only humans are capable of. Nowadays, however, the emergence of artificial intelligence has some believing that artwork could be created by robots. In connection with this, the work involves drawings executed by a robot and a human, each with different drawing skills. In the process, it reconsiders the general meaning of the drawing activity. 

Whilst this isn’t the first example of this type of setup, it isn’t clear whether the robots have any visual interpretation model, so this could be a metaphorical rather than technical presentation.

Link

7 years ago
Hands-On Python & Xcode Image Processing: Build Games & Apps ☞ http://go.learn4startup.com/H1iINoD7z

#DeepLearning

7 years ago
Intel Core with Radeon RX Vega M Graphics Launched: HP, Dell, and Intel NUC http://ift.tt/2CQpCuH

7 years ago
Everything you need to know about the SpaceX Falcon Heavy rocket
The maiden voyage of the SpaceX Falcon Heavy rocket is imminent. Here's everything you need to know about it.
7 years ago
4 Must Have Skills Every Data Scientist Should Learn
"We wanted to follow up our previous piece about how to grow as a data scientist with some other skills senior data scientists should have…
7 years ago
[Patreon] Best of 2017 - Augmented Reality Photography

For Patreon backers, I have put together a brief look at some projects that creatively explore the potential of Augmented Reality, which has been brought into the mainstream via Apple’s ARKit technology.

More Here

7 years ago
Imaginary Soundscape

Online project from Qosmo generates ambient sound for Google Street View panoramas using deep learning, interpreting the visuals to select appropriate sounds:

“Imaginary Soundscape” is a web-based sound installation, in which viewers can freely walk around Google Street View and immerse themselves into imaginary soundscape generated with deep learning models.

… Once trained, the rest was straightforward. For a given image from Google Street View, we can find the best-matched sound file from a pre-collected sound dataset, such that the output of SoundNet with the sound input is the most similar to the output of the CNN model for the image. As the sound dataset, we collected 15000 sound files from internet published under Creative Commons license and filtered with another CNN model on spectrogram trained to distinguish environmental/ambient sound from other types of audio (music, speech, etc.). 
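The matching step described above can be sketched in a few lines: given a CNN embedding of the Street View image and precomputed SoundNet embeddings for the sound library, pick the sound whose embedding is closest. The function and toy vectors below are illustrative, not Qosmo’s code; cosine similarity stands in for whatever distance measure they actually used.

```python
import numpy as np

def best_matched_sound(image_embedding, sound_embeddings):
    """Return the index of the sound whose embedding is most similar
    (by cosine similarity) to the image embedding."""
    img = image_embedding / np.linalg.norm(image_embedding)
    snd = sound_embeddings / np.linalg.norm(sound_embeddings, axis=1, keepdims=True)
    similarities = snd @ img          # cosine similarity to each sound
    return int(np.argmax(similarities))

# Toy example: three 4-d "SoundNet" embeddings and one "image" embedding.
sounds = np.array([[1.0, 0.0, 0.0, 0.0],
                   [0.0, 1.0, 0.0, 0.0],
                   [0.7, 0.7, 0.0, 0.0]])
image = np.array([0.6, 0.8, 0.0, 0.0])
print(best_matched_sound(image, sounds))  # the third sound is the best match
```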

You can try it out for yourself here, and find more background information here

7 years ago
Face Maker

iOS app by Tim Sears for the iPhone X lets you make your own Augmented Reality face masks, which you can draw by hand or create from an image imported from your camera roll:

Face Maker Augmented is an exciting new way to shape the face around you. Using the TrueDepth camera technology of the iPhone X, along with new capabilities of ARKit, you can create incredible face experiences like never before.

More Here

7 years ago
Uncanny Rd

Project by Anastasis Germanidis and Cristóbal Valenzuela is an online neural-network semantic painting tool trained on road imagery. You can doodle or place preset markers, which are then translated into a photorealistic street image:

Uncanny Road is an experimental tool for collectively synthesizing a never-ending road using Generative Adversarial Neural Networks. It is based on the pix2pixHD project, published by @nvidia and UC Berkeley (Project, Paper and Code), that allows for photorealistic image-to-image translation. The pix2pix model was trained using adversarial learning on the Cityscapes dataset, containing thousands of street images.

To synthesize street images, draw on the colormap of the scene. Each color represents a different kind of object label (e.g. road, building, vegetation, etc.) that the neural network can understand
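The colormap input described above can be sketched as a small label-to-colour conversion: each label id in the drawn scene becomes a fixed RGB colour, and the resulting image is what a pix2pix-style model consumes. The palette below uses Cityscapes-style colours, but the exact ids and mapping here are illustrative assumptions, not the project’s code.

```python
import numpy as np

# Hypothetical Cityscapes-style palette: label id -> RGB colour.
PALETTE = {
    0: (128,  64, 128),   # road
    1: ( 70,  70,  70),   # building
    2: (107, 142,  35),   # vegetation
}

def labels_to_colormap(label_map):
    """Convert an (H, W) array of label ids into an (H, W, 3) RGB
    colormap image, the kind of input a pix2pix-style model consumes."""
    h, w = label_map.shape
    rgb = np.zeros((h, w, 3), dtype=np.uint8)
    for label, colour in PALETTE.items():
        rgb[label_map == label] = colour
    return rgb

# "Doodle" a 2x3 scene: building/vegetation on top, road along the bottom.
scene = np.array([[1, 1, 2],
                  [0, 0, 0]])
colormap = labels_to_colormap(scene)
print(colormap[1, 0])   # bottom-left pixel takes the road colour
```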

Try it out for yourself here

7 years ago
CAN 2017 – Highlights and Favourites

The always-fantastic Art & Tech resource Creative Applications have put together their list of highlights from the year:

As 2017 comes to a close, we take a moment to look back at the outstanding work done this year. From spectacular performances, large scale installations, devices and tools to the new virtual spaces for artistic exploration – so many great projects are being added to the CAN archive! Here are just a few, 25 in total, that we and you enjoyed the most this year.

Have a look for yourself here

7 years ago

#BlueOrigin test launch was a success! | Our audience: #djiphantom4 #djiglobal #uav #yuneec #hexacopter #djiinspire1 #quadcopter #miniquad #affiliatemarketing #robotics #robot #amazon #fpv #drones #aerialphotography #amazonprime #robots #djiphantom #arduino #drone #tesla #elonmusk #rcplane #spacex #sparkfun #nasa #raspberrypi #mavicpro #jeffbezos via @theofficialblueorigin (at Van Horn, Texas)

7 years ago
Machine Learning with Python: Easy and robust method to fit nonlinear data ☞ https://towardsdatascience.com/machine-learning-with-python-easy-and-robust-method-to-fit-nonlinear-data-19e8a1ddbd49

7 years ago
Adversarial Examples that Fool Detectors

Computer vision research from Jiajun Lu, Hussein Sibai and Evan Fabry examines how to block neural-network object detection using what looks like DeepDream-esque camouflage:

An adversarial example is an example that has been adjusted to produce a wrong label when presented to a system at test time. To date, adversarial example constructions have been demonstrated for classifiers, but not for detectors. If adversarial examples that could fool a detector exist, they could be used to (for example) maliciously create security hazards on roads populated with smart vehicles. In this paper, we demonstrate a construction that successfully fools two standard detectors, Faster RCNN and YOLO. The existence of such examples is surprising, as attacking a classifier is very different from attacking a detector, and that the structure of detectors - which must search for their own bounding box, and which cannot estimate that box very accurately - makes it quite likely that adversarial patterns are strongly disrupted. We show that our construction produces adversarial examples that generalize well across sequences digitally, even though large perturbations are needed. We also show that our construction yields physical objects that are adversarial. 
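The paper’s detector attack is more involved, but the core idea of an adversarial example — nudging an input so a model flips its output — can be sketched with the classic fast gradient sign method on a toy linear classifier. This is a generic illustration of the concept, not the authors’ construction.

```python
import numpy as np

def fgsm_perturb(x, grad_loss_wrt_x, eps=0.1):
    """Fast Gradient Sign Method: nudge each input dimension by eps in
    the direction that increases the loss, yielding an adversarial example."""
    return x + eps * np.sign(grad_loss_wrt_x)

# Toy linear "classifier": score = w . x, predicted label is sign(score).
w = np.array([1.0, -2.0, 0.5])
x = np.array([0.2, 0.0, 0.1])   # scored positive (0.25) before the attack
# For a linear score, the gradient of the loss (-score) w.r.t. x is -w,
# so stepping along sign(-w) pushes the score down.
x_adv = fgsm_perturb(x, -w, eps=0.2)
print(float(w @ x), float(w @ x_adv))  # the sign of the score flips
```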

The paper can be found here

7 years ago

Reinventing the Wheel

Planning a trip to the Moon? Mars? You’re going to need good tires…

image

Exploration requires mobility. And whether you’re on Earth or as far away as the Moon or Mars, you need good tires to get your vehicle from one place to another. Our decades-long work developing tires for space exploration has led to new game-changing designs and materials. Yes, we’re reinventing the wheel—here’s why.

Wheels on the Moon

image

Early tire designs were focused on moving hardware and astronauts across the lunar surface. The last NASA vehicle to visit the Moon was the Lunar Roving Vehicle during our Apollo missions. The vehicle used four large flexible wire mesh wheels with stiff inner frames. We used these Apollo era tires as the inspiration for new designs using newer materials and technology to better function on a lunar surface.

Up springs a new idea

image

During the mid-2000s, we worked with industry partner Goodyear to develop the Spring Tire, an airless compliant tire that consists of several hundred coiled steel wires woven into a flexible mesh, giving the tires the ability to support high loads while also conforming to the terrain. The Spring Tire has been proven to generate very good traction and durability in soft sand and on rocks.

Spring Tires for Mars

image

A little over a year after the Curiosity rover landed on Mars, engineers began to notice significant wheel damage in 2013 due to the unexpectedly harsh terrain.

image

In order for Spring Tires to go the distance on Martian terrain, new materials were required. Enter nickel titanium, a shape memory alloy with amazing capabilities that allow the tire to deform down to the axle and return to its original shape.

These tires can take a lickin’

image

After building the shape memory alloy tire, Glenn engineers sent it to the Jet Propulsion Laboratory’s Mars Life Test Facility. It performed impressively on the punishing track.

Why reinvent the wheel? It’s worth it.

image

New, high performing tires would allow lunar and Mars rovers to explore greater regions of the surface than currently possible. They conform to the terrain and do not sink as much as rigid wheels, allowing them to carry heavier payloads for the same given mass and volume. Also, because they absorb energy from impacts at moderate to high speeds, there is potential for use on crewed exploration vehicles which are expected to move at speeds significantly higher than the current Mars rovers.

Airless tires on Earth

image

Maybe. Recently, engineers and materials scientists have been testing a spinoff tire version that would work on cars and trucks on Earth. Stay tuned as we continue to push the boundaries on traditional concepts for exploring our world and beyond.  

Make sure to follow us on Tumblr for your regular dose of space: http://nasa.tumblr.com.  

7 years ago
Furlexa

Project from Zach Levine turns a Furby toy into an Amazon-powered home assistant with Alexa voice recognition:

I’m Sorry …

I thought I’d make Furby a little less annoying and give him an upgrade with a real brain. Step aside, Tin Man.

By combining a Raspberry Pi Zero W (a tiny computer), Amazon’s (mostly) open-source Alexa Voice Service software, and a few other electrical components, I converted my normal Furby into an Amazon Echo.

I give you: Furlexa.

It is my hope that you can either use this guide as a fun read or to build your own Furby Echo – I tried to write it in a style that any crowd would enjoy and I hope that I accomplished that goal.

More details on how to build your own can be found here

7 years ago
AYA-BAMBI - slic + style2paint

Another experiment from LuluXXX exploring machine-learning visual outputs, combining black-and-white image colourization, SLIC superpixel segmentation and style2paint manga colourization on footage of Aya-Bambi dancing:

mixing slic superpixels and style2paint. starts with version2 then version4 plus a little breakdown at the end and all 4 versions of the algorithm. music : Spettro from NadjaLind mix - https://soundcloud.com/nadjalind/nadja-lind-pres-new-lucidflow-and-sofa-sessions-meowsic-mix original footage : https://www.youtube.com/watch?v=N2YhzBjiYUg
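As a rough illustration of what superpixel segmentation contributes to the look: each segment gets flattened to its mean colour, producing the posterised effect. The sketch below is a crude grid-based stand-in for SLIC (real SLIC clusters pixels by colour and position), not LuluXXX’s pipeline.

```python
import numpy as np

def block_superpixels(image, block=2):
    """Crude stand-in for SLIC: split the image into fixed blocks and
    paint each block with its mean colour, giving a posterised look."""
    out = image.astype(float)
    h, w = image.shape[:2]
    for y in range(0, h, block):
        for x in range(0, w, block):
            patch = out[y:y+block, x:x+block]
            patch[...] = patch.mean(axis=(0, 1))  # flatten block to mean
    return out.astype(image.dtype)

# A 2x2 RGB "frame" whose four pixels average to 40 per channel.
frame = np.array([[[10, 10, 10], [30, 30, 30]],
                  [[50, 50, 50], [70, 70, 70]]], dtype=np.uint8)
print(block_superpixels(frame, block=2)[0, 0])  # -> [40 40 40]
```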

Link

7 years ago
Style2paint - ChrystaBell

Visual experiment from LuluXXX transforms a music video using lllyasviel’s code designed to colorize black-and-white manga:

messing around with style2paint : https://github.com/lllyasviel/style2paints a lot of entropy going on. original video : https://www.youtube.com/watch?v=gGjJFRPtdDI Music performed by Twin Peaks cast member Chrysta Bell and David Lynch, written by David Lynch and Dean Hurley.

An online version of Style2paint can be found here

7 years ago
Stealth Hounds

Japanese Virtual Reality arcade VR-Zone Shinjuku is hosting a field VR multiplayer experience based on Ghost in the Shell:

As a rookie of the special force team created by Motoko Kusanagi, join in the fight against the terrorist organization.

Employ iconic, powerful technology from Ghost in the Shell such as optical camouflage, prosthetic body, cyberbrain, etc. Become fully immersed and experience futuristic warfare for yourself. 

More Here

7 years ago
HotStepper

Augmented Reality app from Nexus Studios offers a geolocation wayfinding service with a virtual guide in the form of a half-naked gentleman:

HotStepper is your first Augmented Reality sidekick to any destination on Earth. HotStepper features a confident dude who, when he’s not dancing, will walk you to any location you need to go. All you need to do is go outside, pick a destination on the map and then just follow him as he does his thing. 

More Here

7 years ago
A Computer Vision System’s Walk Through Times Square

Video from deepython demonstrates an object recognition neural network framework applied to footage taken in New York:

This is a state of the art object detection framework called Faster R-CNN described here https://arxiv.org/abs/1506.01497 using tensorflow.

I took the following video and fed it through Tensorflow Faster R-CNN model, this isn’t running on an embedded device yet.
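Detectors like Faster R-CNN propose many overlapping candidate boxes per frame and rely on non-maximum suppression (NMS) to keep only the strongest. A minimal, generic sketch of that post-processing step (not tied to the video’s actual pipeline):

```python
def iou(a, b):
    """Intersection-over-union of two boxes given as (x1, y1, x2, y2)."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    area = lambda r: (r[2] - r[0]) * (r[3] - r[1])
    union = area(a) + area(b) - inter
    return inter / union if union else 0.0

def nms(boxes, scores, thresh=0.5):
    """Greedy non-maximum suppression: keep the highest-scoring box,
    drop any remaining box overlapping a kept box by more than `thresh`."""
    order = sorted(range(len(boxes)), key=lambda i: -scores[i])
    keep = []
    for i in order:
        if all(iou(boxes[i], boxes[j]) <= thresh for j in keep):
            keep.append(i)
    return keep

boxes = [(0, 0, 10, 10), (1, 1, 11, 11), (20, 20, 30, 30)]
scores = [0.9, 0.8, 0.7]
print(nms(boxes, scores))   # the near-duplicate second box is suppressed
```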

Link

7 years ago
Death Mask

Programming project from Or Fleisher and Anastasis Germanidis combines Augmented Reality and machine learning, using a neural net trained for age prediction running on a mobile device camera:

‘Death-Mask’ predicts how long people have to live and overlays that in the form of a “clock” above their heads in augmented reality. The project uses a machine learning model titled AgeNet for the prediction process. Once predicted, it uses the average life expectancy in that location to try and estimate how long one has left.

The aesthetic inspiration derives from the concept of death masks: sculptures meant to symbolize the death of a person by casting their face into a sculpture (i.e. a mask).

The experiment uses ARKit to render the visual content in augmented reality on an iPad and CoreML to run the machine learning model in real-time. The project is by no means an accurate representation of one’s life expectancy and is more oriented towards the examination of public information in augmented reality in the age of deep learning.
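The estimate described above reduces to simple arithmetic: local average life expectancy minus the model’s predicted age, floored at zero. A sketch with hypothetical numbers (the project itself stresses it is not an accurate representation):

```python
def remaining_years(predicted_age, life_expectancy):
    """Naive 'death clock': time left is local average life expectancy
    minus the age the model predicts, floored at zero."""
    return max(0.0, float(life_expectancy - predicted_age))

# Hypothetical numbers: an AgeNet-style model guesses 30, local expectancy 80.
print(remaining_years(30, 80))   # -> 50.0
print(remaining_years(90, 80))   # -> 0.0 (never negative)
```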

Link

7 years ago
Elastic Man

Interactive webtoy from Adult Swim, put together by David Li, features an elastic Morty head which you can stretch and pull to your heart’s desire.

Try it out for yourself here

7 years ago
A.I. Researchers Leave Elon Musk Lab to Begin Robotics Start-Up
"Pieter Abbeel, a Berkeley professor, is part of the team that has started Embodied Intelligence to make it possible for robots to learn on their own.
7 years ago

From Microscopic to Multicellular: Six Stories of Life that We See from Space

Life. It’s the one thing that, so far, makes Earth unique among the thousands of other planets we’ve discovered. Since the fall of 1997, NASA satellites have continuously and globally observed all plant life at the surface of the land and ocean. During the week of Nov. 13-17, we are sharing stories and videos about how this view of life from space is furthering knowledge of our home planet and the search for life on other worlds.

image

Earth is the only planet with life, as far as we know. From bacteria in the crevices of the deepest oceans to monkeys swinging between trees, Earth hosts life in all different sizes, shapes and colors. Scientists often study Earth from the ground, but some also look to our satellites to understand how life waxes and wanes on our planet.

Over the years, scientists have used this aerial view to study changes in animal habitats, track disease outbreaks, monitor forests and even help discover a new species. While this list is far from comprehensive, these visual stories of bacteria, plants, land animals, sea creatures and birds show what a view from space can reveal.

1. Monitoring the single-celled powerhouses of the sea

image

Known as the grass of the ocean, phytoplankton are one of the most abundant types of life in the ocean. Usually single-celled, these plant-like organisms are the base of the marine food chain. They are also responsible for the only long-term transfer of carbon dioxide from Earth’s atmosphere to the ocean. 

Even small changes in phytoplankton populations can affect carbon dioxide concentrations in the atmosphere, which could ultimately affect Earth’s global surface temperatures. Scientists have been observing global phytoplankton populations continuously since 1997 starting with the Sea-Viewing Wide Field-of View Sensor (SeaWiFS). They continue to study the small life-forms by satellite, ships and aircrafts.

2. Predicting cholera bacteria outbreaks

Found on the surface of zooplankton and in contaminated water, the bacteria that cause the infectious disease cholera — Vibrio cholerae — affect millions of people every year with severe diarrhea, sometimes leading to death. While our satellite sensors can’t detect the actual bacteria, scientists use various satellite data to look for the environmental conditions that the bacteria thrive in. 

Specifically, microbiologist Rita Colwell at the University of Maryland, College Park, and West Virginia University hydrologist Antar Jutla studied data showing air and ocean temperature, salinity, precipitation, and chlorophyll concentrations, the latter a marker for zooplankton. Anticipating where the bacteria will bloom helps researchers to mitigate outbreaks.

image

Recently, Colwell and Jutla have been able to estimate cholera risk after major events, such as severe storms, by looking at satellite precipitation data, air temperature, and population maps. The two maps above show the team’s predicted cholera risk in Haiti two weeks after Hurricane Matthew hit over October 1-2, 2016 and the actual reported cholera cases in October 2016.

3. Viewing life on land

From helping preserve forests for chimpanzees to predicting deer population patterns, scientists use our satellites to study wildlife across the world. Satellites can also see the impacts of perhaps the most relatable animal to us: humans. Every day, we impact our planet in many ways including driving cars, constructing buildings and farming – all of which we can see with satellites.

Our Black Marble image provides a unique view of human activity. Looking at trends in our lights at night, scientists can study how cities develop over time, how lighting and activity changes during certain seasons and holidays, and even aid emergency responders during power outages caused by natural disasters.

4. Tracking bird populations

Scientists use our satellite data to study birds in a variety of ways, from understanding their migratory patterns, to spotting potential nests, to tracking populations. In a rather creative application, scientists used satellite imagery to track Antarctica’s emperor penguin populations by looking for their guano – or excrement.

image

Counting emperor penguins from the ground perspective is challenging because they breed in some of the most remote and cold places in the world, and in colonies too large to easily count manually. With their black and white coats, emperor penguins are also difficult to count from an aerial view as they sometimes blend in with shadows on the ice. Instead, Phil Trathan and his colleagues at the British Antarctic Survey looked through Landsat imagery for brown stains on the sea ice. By looking for penguin droppings, Trathan said his team identified 54 emperor penguin colonies along the Antarctic coast.
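The guano search boils down to flagging pixels whose colour is close to a target “stain” brown against the white of the sea ice. A toy sketch of that idea follows; the target colour and tolerance are made-up illustration values, not the British Antarctic Survey’s actual method.

```python
import numpy as np

def stain_mask(rgb, rgb_target=(120, 80, 40), tol=40):
    """Flag pixels whose colour is within `tol` (per channel) of a target
    'guano brown' -- a crude stand-in for the Landsat stain search."""
    diff = np.abs(rgb.astype(int) - np.array(rgb_target))
    return (diff <= tol).all(axis=-1)

# A 2x2 patch of "sea ice": three near-white pixels and one brown stain.
ice = np.array([[[250, 250, 250], [118,  82,  44]],
                [[245, 248, 250], [251, 252, 250]]], dtype=np.uint8)
mask = stain_mask(ice)
print(mask)   # only the brown pixel is flagged
```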

5. Parsing out plant life

Just as we see plants grow and wilt on the ground, satellites observe the changes from space. Flourishing vegetation can indicate a lively ecosystem while changes in greenery can sometimes reveal natural disasters, droughts or even agricultural practices. While satellites can observe plant life in our backyards, scientists can also use them to provide a global picture. 

image

Using data from satellites including SeaWiFS, and instruments including the NASA/NOAA Visible Infrared Imaging Radiometer Suite and the Moderate Resolution Imaging Spectroradiometer, scientists have the most complete view of global biology to date, covering all of the plant life on land and at the surface of the ocean.

6. Studying life under the sea

Our satellites have helped scientists study creatures living in the oceans whether it’s finding suitable waters for oysters or protecting the endangered blue whale. Scientists also use the data to learn more about one of the most vulnerable ecosystems on the planet – coral reefs.

image

They may look like rocks or plants on the seafloor, but corals are very much living animals. Receiving sustenance from photosynthetic plankton living within their calcium carbonate structures, coral reefs provide food and shelter for many kinds of marine life, protect shorelines from storms and waves, serve as a source for potential medicines, and operate as some of the most diverse ecosystems on the planet.

image

However, coral reefs are vulnerable to the warming of the ocean and human activity. Our satellites measure the surface temperature of ocean waters. These measurements have revealed rising water temperatures surrounding coral reef systems around the world, which causes a phenomenon known as “coral bleaching.” To add to the satellite data, scientists use measurements gathered by scuba divers as well as instruments flown on planes.

During the week of Nov. 13-17, check out our stories and videos about how this view of life from space is furthering knowledge of our home planet and the search for life on other worlds. Follow at www.nasa.gov/Earth.

Make sure to follow us on Tumblr for your regular dose of space: http://nasa.tumblr.com.

7 years ago
10 Minutes of Imaginary Japanese Anime Face

Video from Yingtao Tian presents anime characters generated with GAN neural networks.

You can create your own using the webtoy MakeGirlsMoe here

7 years ago