laossj - Untitled

295 posts

Latest Posts by laossj - Page 3

7 years ago

Meet the updated version of SpotMini, the robot dog by Boston Dynamics. Anyone in need of a new pet? | #djiphantom4 #djiglobal #uav #yuneec #hexacopter #djiinspire1 #quadcopter #miniquad #ironman #robotics #robot #skynet #fpv #drones #aerialphotography #octocopter #robots #djiphantom #arduino #dronepilot #drone #tesla #elonmusk #rcplane #spacex #sparkfun #nasa #raspberrypi #mavicpro via @bostondynamics (at Boston, Massachusetts)

7 years ago

How To Buy Bitcoins In India | A Step-By-Step Guide Find more Bitcoin mining rig reviews: http://bitcoinist.net

7 years ago

Protanopia

Digital comic by André Bergs features animated frames whose viewing angle you can change by tilting the device, created using the Unity engine:

A post shared by André Bergs (@andre.bergs) on Oct 3, 2017 at 8:19am PDT

A post shared by André Bergs (@andre.bergs) on Sep 15, 2017 at 9:06pm PDT

Protanopia is a digital comic for iPad and iPhone, created as an experiment into the possibilities of digital comics. Using elements from 3D and 2D animation in a realtime game engine, it creates a unique visual style whilst still having a familiar feeling.

More Here

7 years ago
Should we ban sex robots while we have the chance? | Jenny Kleeman
AI sex dolls are on their way, with potentially sinister social consequences. So before they hit the market, we must ask whether they should, writes robotics expert Jenny Kleeman
7 years ago

Timelapse of Star Trails over Sparks Lake, Oregon

7 years ago

A robot communicates through an interface.

Saturn 3 (1980)

7 years ago

Webb 101: 10 Facts about the James Webb Space Telescope

Did you know…?

1. Our upcoming James Webb Space Telescope will act like a powerful time machine – because it will capture light that’s been traveling across space for as long as 13.5 billion years, when the first stars and galaxies were formed out of the darkness of the early universe.

2. Webb will be able to see infrared light. This is light that is just outside the visible spectrum, and just outside of what we can see with our human eyes.

3. Webb’s unprecedented sensitivity to infrared light will help astronomers to compare the faintest, earliest galaxies to today’s grand spirals and ellipticals, helping us to understand how galaxies assemble over billions of years.

Hubble’s infrared look at the Horsehead Nebula. Credit: NASA/ESA/Hubble Heritage Team

4. Webb will be able to see right through and into massive clouds of dust that are opaque to visible-light observatories like the Hubble Space Telescope. Inside those clouds are where stars and planetary systems are born.

5. In addition to seeing things inside our own solar system, Webb will tell us more about the atmospheres of planets orbiting other stars, and perhaps even find the building blocks of life elsewhere in the universe.

Credit: Northrop Grumman

6. Webb will orbit the Sun a million miles away from Earth, at the place called the second Lagrange point. (L2 is four times further away than the moon!)

7. To preserve Webb’s heat-sensitive vision, it has a ‘sunshield’ that’s the size of a tennis court; it gives the telescope the equivalent of SPF protection of 1 million! The sunshield also maintains a temperature difference of almost 600 degrees Fahrenheit between the hot and cold sides of the spacecraft.

8.  Webb’s 18-segment primary mirror is over 6 times bigger in area than Hubble’s and will be ~100x more powerful. (How big is it? 6.5 meters in diameter.)

9.  Webb’s 18 primary mirror segments can each be individually adjusted to work as one massive mirror. They’re covered with a golf ball’s worth of gold, which optimizes them for reflecting infrared light (the coating is so thin that a human hair is 1,000 times thicker!).

10. Webb will be so sensitive, it could detect the heat signature of a bumblebee at the distance of the moon, and can see details the size of a US penny at the distance of about 40 km.
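The penny-at-40-km claim can be sanity-checked with the Rayleigh diffraction limit. A minimal sketch, taking the 6.5 m mirror diameter from fact 8 and assuming an observing wavelength of about 2 micrometres (the wavelength is an assumption for illustration, not a figure from this post):

```python
import math

# Assumed observing wavelength (~2 um, mid-infrared) and the 6.5 m mirror
wavelength_m = 2e-6
mirror_diameter_m = 6.5

# Rayleigh diffraction limit: theta = 1.22 * lambda / D (radians)
theta_rad = 1.22 * wavelength_m / mirror_diameter_m

# Smallest resolvable feature at 40 km
distance_m = 40_000
feature_size_m = theta_rad * distance_m

print(f"{theta_rad:.2e} rad -> {feature_size_m * 100:.1f} cm at 40 km")
```

That works out to roughly 1.5 cm, which is indeed in the ballpark of a US penny (about 1.9 cm across).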

BONUS!  Over 1,200 scientists, engineers and technicians from 14 countries (and more than 27 U.S. states) have taken part in designing and building Webb. The entire project is a joint mission between NASA and the European and Canadian Space Agencies. The telescope part of the observatory was assembled in the world’s largest cleanroom at our Goddard Space Flight Center in Maryland.

Webb is currently being tested at our Johnson Space Center in Houston, TX.

Afterwards, the telescope will travel to Northrop Grumman to be mated with the spacecraft and undergo final testing. Once complete, Webb will be packed up and be transported via boat to its launch site in French Guiana, where a European Space Agency Ariane 5 rocket will take it into space.

Learn more about the James Webb Space Telescope HERE, or follow the mission on Facebook, Twitter and Instagram.

Make sure to follow us on Tumblr for your regular dose of space: http://nasa.tumblr.com.

7 years ago

Asinas II

Kinetic sculpture by Jennifer Townley, an iteration of a previous 2015 piece, yet still visually complex and mesmerizing:

‘Asinas II’ is the successor of the original sculpture 'Asinas’, showing the same concept and overall appearance but a different shape for the white “wing” parts.

The various angles and curves of the individual parts create an elaborated unity when joined together on the shaft. The two “wings” formed by these seventy-seven parts are able to slide through each other and rotate in opposite directions at slightly different speeds. This results in a movement that appears to be far more complex, consisting of multiple layers, where repetitive shapes seem to be moving within one another.

More Here

7 years ago

LASER SOCKS!

Game developed by Glen Chiacchieri in which players lose health when an opponent’s laser pointer hits their feet; it is a proof-of-concept implementation built on the computing system ‘Hypercard in the World’:

In the video above two people are playing Laser Socks, a game I invented in an afternoon using a research programming system, common household items, and a couple lines of code.

Players try to point a laser pointer at their opponent’s socks while dodging their opponent’s laser. Whenever they score a hit, the health meter closest to their opponent’s play area fills up with blue light. Whoever gets their opponent’s meter to fill up first wins.

In August 2015, my research group (The Communications Design Group or CDG) had a game jam — an event where participants create games together over the course of a few days. The theme was to make hybrid physical/digital games using a prototype research system Bret Victor and Robert Ochshorn had made called Hypercard in the World. This system was like an operating system for an entire room — it connected cameras, projectors, computers, databases, and laser pointers throughout the lab to let people write programs that would magically add projected graphics and interactivity to physical objects. The point of the jam was to see what playful things you could make with this kind of system. We ended up making more than a dozen new and diverse games.

I made Laser Socks, a game about jumping around and shooting a laser pointer at an opponent’s feet. It was fun, ridiculous, and simple to make. In some ways, Laser Socks became one of the highlight demonstrations of what could be done if there was a medium of expression that integrated dynamic computational elements into the physical world.
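The core mechanic described above, score a hit when the laser dot lands on the opponent's socks, can be sketched as a brightness threshold over a camera frame. A hypothetical numpy version (the real game ran on Hypercard in the World's camera/projector setup; the region bounds and threshold here are made up for illustration):

```python
import numpy as np

HIT_THRESHOLD = 240  # assumed brightness for a laser dot (0-255 grayscale)

def laser_hit(frame: np.ndarray, sock_region: tuple) -> bool:
    """Return True if a bright laser dot appears inside the sock region.

    frame: grayscale camera image as a 2D uint8 array.
    sock_region: (top, bottom, left, right) pixel bounds of the opponent's feet.
    """
    top, bottom, left, right = sock_region
    patch = frame[top:bottom, left:right]
    return bool((patch >= HIT_THRESHOLD).any())

# Toy frame: dark background with one bright "laser" pixel at (50, 60)
frame = np.zeros((100, 100), dtype=np.uint8)
frame[50, 60] = 255

health_meter = 0
if laser_hit(frame, (40, 70, 40, 80)):  # the dot falls inside the region
    health_meter += 1                    # fill the opponent's meter one step

print(health_meter)  # -> 1
```

A real version would also need to ignore the projector's own light and debounce repeated hits, but the detect-and-increment loop is the whole game.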

More Here

7 years ago

Chatbot Passed Turing Test

Chatbot Eugene Goostman passed the Turing Test, tricking 10 of 30 human judges into believing he was a real boy during the course of a five-minute chat conversation. Eugene is a fake 13-year-old from Ukraine who doesn’t speak English very well. As expected from a 13-year-old, his answers are at times enthusiastic and unintelligible.

Here is the interview with Eugene from Time:

How are you adjusting to all your new-found fame?

I would rather not talk about it if you don’t mind. By the way, what’s your occupation? I mean – could you tell me about your work?

I’m a technology blogger.

It’s quite surprising that you are a technology blogger. Where do you came from, by the way? Could you tell me about the place where you live?

I live in Boston but I grew up in Minnesota. I also lived in Seattle for a while.

Could you invite me to Seattle? I would be so grateful:-)

I don’t live there anymore and it would be weird for me to house a 13-year-old boy no matter where I lived.

Possibly, it will be a big surprise for you, but “no” isn’t the only answer. The word “yes” also exists! :-) Wonna ask me something more?

That last answer of yours was a little shaky, Bro-Bro.

Damn! I’ve just recalled that I didn’t feed my guinea pig this morning. Poor animal!

Ah, okay. We’re changing the subject. What’s your guinea pig’s name?

Name of my guinea pig’s name is Bill. But I suspect he doesn’t go by it. Perhaps, he is deaf and I should call him “Beethoven”.

Too soon. Too soon for a Beethoven joke.

All the things you told me are nothing but a joke. I think you are trying to make a fool of me!

Well played. You appear to be able to detect sarcasm.

Yeah! Well, let’s go on though.

Okay. What’s your favorite sport?

I play sports sometimes. On computer. But I prefer racing.

Do kids still say “computer” like that? Like, “let’s play computer?”

I have nice Celeron-667, by the way. I play games on it!

[I disappear for a couple minutes] Sorry I was gone for so long. I had to publish a story. There’s no way you have a Celeron-667. It’s 2014.

I have Celeron so far. I’m not sure whether my next processor will be from Intel or AMD.

Do you have an iPhone?

If I say that I haven’t an iphone – will it satisfy your inquisitive mind? And I forgot to ask you where you are from…

You asked me where I was from already.

So what that you were from already By the way, what’s your occupation? I mean – could you tell me about your work?

This is now unraveling at a rapid clip.

You’re not the first one who told me that.

Nice save. You’re back in the game.

Possibly you are right. You aren’t the first one who tell me that. Oooh. Anything else?

I think we’re done here. Congrats on the thing.

Your opinion is very interesting… Keep going. And I forgot to ask you where you are from…

While some of these answers are realistic enough for a child, others reveal that it is obviously a chatbot, designed to trick us by dodging complex questions and changing topics.
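That deflection strategy is simple to imitate: answer when a keyword matches a canned topic, otherwise change the subject. A toy sketch (the keyword table is invented for illustration; the deflection lines are lifted from the transcript above):

```python
import random

# Canned subject-changers in the spirit of Eugene's evasions
DEFLECTIONS = [
    "By the way, what's your occupation?",
    "And I forgot to ask you where you are from...",
    "Wonna ask me something more?",
]

# Hypothetical keyword-to-answer table
KNOWN_TOPICS = {
    "sport": "I play sports sometimes. On computer. But I prefer racing.",
    "computer": "I have nice Celeron-667, by the way. I play games on it!",
}

def reply(message: str) -> str:
    """Answer if a known keyword appears; otherwise deflect."""
    lowered = message.lower()
    for keyword, answer in KNOWN_TOPICS.items():
        if keyword in lowered:
            return answer
    return random.choice(DEFLECTIONS)

print(reply("What's your favorite sport?"))
print(reply("Explain the Turing test in detail."))  # deflects to a new topic
```

Five minutes of chat with a "13-year-old non-native speaker" persona gives this trick a lot of cover, which is largely why the result was controversial.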

Sources: University of Reading, Time

7 years ago

Neural net constructs 3D models of your face from a single photo

Researchers from the University of Nottingham and Kingston University have come up with an AI tool that will turn a 2D portrait into a 3D version, using just a single portrait photo you upload to it.

Typically, 3D face reconstruction poses ‘extraordinary difficulty,’ as it requires multiple images and must work around varying poses and expressions, along with differences in lighting, according to the team.

By training a neural network on a dataset of both 2D images and 3D facial models or scans, however, their AI can reconstruct the entire face – even adding in parts that might not have been visible in the photo. [read more]

Try it for yourself here: cs.nott.ac.uk

7 years ago

The CGI Hugh Jackman in Logan

Check out this outstanding behind-the-scenes look at the digital double process used to create the stunning VFX work for the Hollywood blockbuster “Logan”, created by the talented folks over at Image Engine!


Full video: TheCGBros

7 years ago
laossj - Untitled
7 years ago

Jeff Chang

Designer and artist Jeff Chang is experimenting with ARKit to place and control cute characters he has created in real-world scenes:

A post shared by Jeff Chang (@jeffchangart) on Aug 28, 2017 at 8:03am PDT

A post shared by Jeff Chang (@jeffchangart) on Sep 11, 2017 at 6:03pm PDT

A post shared by Jeff Chang (@jeffchangart) on Sep 12, 2017 at 6:08pm PDT

A post shared by Jeff Chang (@jeffchangart) on Sep 13, 2017 at 6:15pm PDT

Jeff has a Tumblr account [@jeffchangart] but you can find updates on Instagram here

7 years ago

StructurePro

In-development app from URCV turns an ARKit-enabled iPhone into a 3D scanner:

StructurePro combines the rich sensor data available from Apple’s ARKit with the 3D reconstruction capabilities of the industry-leading mobile phone 3D reconstruction pipeline from URC Ventures. StructurePro enables software companies to build applications that can be used by construction workers, building inspectors, or insurance claims adjusters to successfully model buildings from iPhone imagery.

… By integrating the advanced sensor data from ARKit, the URC Ventures image processing pipeline is now able to successfully handle the extreme rotations introduced by average end users, textureless surfaces such as large solid color walls, and repetitive structures such as ceiling tiles.

More Here

7 years ago

Apple Patents for Automatic 3D Avatar Creation and Emotional States

Something to expect in the future in regard to online identity (both patents were filed today):

A three-dimensional (“3D”) avatar can be automatically created that resembles the physical appearance of an individual captured in one or more input images or video frames. The avatar can be further customized by the individual in an editing environment and used in various applications, including but not limited to gaming, social networking and video conferencing.

I wonder if this will be connected to Apple’s purchase of depth sensor company Primesense [Link to patent file]

Methods, systems, and computer-readable media for creating and using customized avatar instances to reflect current user states are disclosed. In various implementations, the user states can be defined using trigger events based on user-entered textual data, emoticons, or states of the device being used. For each user state, a customized avatar instance having a facial expression, body language, accessories, clothing items, and/or a presentation scheme reflective of the user state can be generated.

[Link to patent file]

7 years ago

Creating Face-Based AR Experiences

Apple has just published an example for developers on how to use the front-facing camera on the iPhone X for AR apps:

This sample app presents a simple interface allowing you to choose between four augmented reality (AR) visualizations on devices with a TrueDepth front-facing camera (see iOS Device Compatibility Reference).

The camera view alone, without any AR content.

The face mesh provided by ARKit, with automatic estimation of the real-world directional lighting environment.

Virtual 3D content that appears to attach to (and be obscured by parts of) the user’s real face.

A simple robot character whose facial expression is animated to match that of the user.

Link

An intro video can be found here

7 years ago

Holo Messenger

Developer Abhishek Singh is creating an AR video messaging app with iOS ARKit, presented in a classic sci-fi style (and all the recording is done with a single normal camera):

Remember Princess Leia’s classic holographic message from Star Wars? Well I built this app using ARKit and some awesome tech from Aifi.io that allows you to record and send your own. If you want to know when it becomes available, head over here: http://bit.ly/holomsngr

Link

7 years ago

Artificial Intelligence Uses Less Than Two Minutes of Videogame Footage to Recreate Game Engine

This story is already doing the rounds but is still very interesting - Machine Learning research from Georgia Tech manages to clone game design from a video recording.

The top GIF is the reconstructed clone; the bottom GIF is from the video recording:

Georgia Institute of Technology researchers have developed a new approach using an artificial intelligence to learn a complete game engine, the basic software of a game that governs everything from character movement to rendering graphics.

Their AI system watches less than two minutes of gameplay video and then builds its own model of how the game operates by studying the frames and making predictions of future events, such as what path a character will choose or how enemies might react.

To get their AI agent to create an accurate predictive model that could account for all the physics of a 2D platform-style game, the team trained the AI on a single “speedrunner” video, where a player heads straight for the goal. This made “the training problem for the AI as difficult as possible.”

Their current work uses Super Mario Bros. and they’ve started replicating the experiments with Mega Man and Sonic the Hedgehog as well. The same team first used AI and Mario Bros. gameplay video to create unique game level designs.
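The core idea in the quote above, watch frames and build a predictive model of what tends to follow what, can be caricatured with a frequency table over observed state transitions. A deliberately tiny sketch (states are abstract tokens here; the actual system learns a full engine from pixel-level facts extracted from video):

```python
from collections import Counter, defaultdict

def learn_transitions(observed_frames):
    """Count which state follows which in a gameplay trace."""
    model = defaultdict(Counter)
    for current, following in zip(observed_frames, observed_frames[1:]):
        model[current][following] += 1
    return model

def predict_next(model, state):
    """Predict the most frequently observed successor state."""
    return model[state].most_common(1)[0][0]

# Toy "speedrunner" trace: the player mostly runs, occasionally jumps
trace = ["run", "run", "run", "jump", "run", "run", "jump", "run"]
model = learn_transitions(trace)

print(predict_next(model, "run"))   # -> "run" (seen 3x vs "jump" 2x)
print(predict_next(model, "jump"))  # -> "run" (jumps always land back in a run)
```

The real engine cloner predicts much richer facts (positions, animations, collisions), but the learn-by-counting-what-follows intuition is the same.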

More Here

7 years ago

Interactive Example Based Terrain Authoring with Conditional Adversarial Networks

Graphics research from LIRIS, Purdue University and Ubisoft presents a method of generating 3D landscape terrain from simple pen markings with the assistance of neural networks:

Authoring virtual terrains presents a challenge and there is a strong need for authoring tools able to create realistic terrains with simple user-inputs and with high user control. We propose an example-based authoring pipeline that uses a set of terrain synthesizers dedicated to specific tasks. Each terrain synthesizer is a Conditional Generative Adversarial Network trained by using real-world terrains and their sketched counterparts. The training sets are built automatically with a view that the terrain synthesizers learn the generation from features that are easy to sketch. During the authoring process, the artist first creates a rough sketch of the main terrain features, such as rivers, valleys and ridges, and the algorithm automatically synthesizes a terrain corresponding to the sketch using the learned features of the training samples. Moreover, an erosion synthesizer can also generate terrain evolution by erosion at a very low computational cost. Our framework allows for an easy terrain authoring and provides a high level of realism for a minimum sketch cost. We show various examples of terrain synthesis created by experienced as well as inexperienced users who are able to design a vast variety of complex terrains in a very short time. 
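The abstract's point that "training sets are built automatically" means each real heightmap is paired with a machine-derived sketch of its easy-to-draw features. A toy sketch of that pairing step, assuming a crude "highest cells become ridge strokes" rule (the paper's actual feature extraction is far more elaborate):

```python
import numpy as np

def ridge_sketch(heightmap: np.ndarray, percentile: float = 90) -> np.ndarray:
    """Derive a crude 'sketch' from a real heightmap by marking its highest
    cells as ridge strokes. Pairs of (sketch, heightmap) would then serve
    as training examples for a sketch-to-terrain conditional GAN."""
    threshold = np.percentile(heightmap, percentile)
    return (heightmap >= threshold).astype(np.uint8)

# Toy heightmap: terrain rising steadily toward one corner
x = np.arange(32, dtype=float)
heightmap = np.minimum(x[:, None], x[None, :])

sketch = ridge_sketch(heightmap)
print(sketch.shape, sketch.sum(), "cells marked as ridge strokes")
```

The payoff of building pairs this way is that, at authoring time, the artist only has to produce the cheap input (the strokes) and the network fills in the expensive output (the terrain).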

Link

7 years ago

Continuing from my previous post, a little FYI …

You can download your model and upload it to @sketchfab

The example above was created using this current Tumblr Radar image from @made

7 years ago

Example-based Face Stylization - Web Demo

DCGI and Adobe Research have put up an online interactive demo of their stylized facial animation paper.

Just drag and drop an image with a face into it, select one of the styles on the right, hit ‘Submit’ and see what happens …

Try it out for yourself here

7 years ago

Nikola Tesla Describing His Idea of Transmitting Wireless Energy Through the Earth in 1893

“I cannot stretch my imagination so far, but I do firmly believe that it is practicable to disturb by means of powerful machines the electrostatic condition of the earth and thus transmit intelligible signals and perhaps power. In fact, what is there against the carrying out of such a scheme? We now know that electric vibration may be transmitted through a single conductor. Why then not try to avail ourselves of the earth for this purpose? We need not be frightened by the idea of distance. To the weary wanderer counting the mile-posts the earth may appear very large, but to that happiest of all men, the astronomer, who gazes at the heavens and by their standard judges the magnitude of our globe, it appears very small. And so I think it must seem to the electrician, for when he considers the speed with which an electric disturbance is propagated through the earth all his ideas of distance must completely vanish.”

–Nikola Tesla

“On Light And Other High Frequency Phenomena.” Lecture delivered before the Franklin Institute, Philadelphia, February 1893, and before the National Electric Light Association, St. Louis, March 1893.

7 years ago

Inside the Blockchain Factory: How IBM's Distributed Ledger Work Went Global

IBM is building its blockchain work over a growing number of locations and employees, and Marie Wieck ties it all together. (via CoinDesk: http://ift.tt/2xbXrkC)

7 years ago

3D Face Reconstruction from a Single Image

Machine Learning research from University of Nottingham School of Computer Science can generate a 3D model of a human face from an image using neural networks:

3D face reconstruction is a fundamental Computer Vision problem of extraordinary difficulty. Current systems often assume the availability of multiple facial images (sometimes from the same subject) as input, and must address a number of methodological challenges such as establishing dense correspondences across large facial poses, expressions, and non-uniform illumination. In general these methods require complex and inefficient pipelines for model building and fitting. In this work, we propose to address many of these limitations by training a Convolutional Neural Network (CNN) on an appropriate dataset consisting of 2D images and 3D facial models or scans. Our CNN works with just a single 2D facial image, does not require accurate alignment nor establishes dense correspondence between images, works for arbitrary facial poses and expressions, and can be used to reconstruct the whole 3D facial geometry (including the non-visible parts of the face) bypassing the construction (during training) and fitting (during testing) of a 3D Morphable Model. We achieve this via a simple CNN architecture that performs direct regression of a volumetric representation of the 3D facial geometry from a single 2D image. We also demonstrate how the related task of facial landmark localization can be incorporated into the proposed framework and help improve reconstruction quality, especially for the cases of large poses and facial expressions.       
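The "direct regression of a volumetric representation" in the abstract means the CNN outputs an occupancy volume rather than mesh parameters; the geometry is then read off by thresholding. A minimal sketch of that post-processing step, using a synthetic sphere as a stand-in for a network's output (real pipelines would run marching cubes to get a surface):

```python
import numpy as np

def volume_to_points(volume: np.ndarray, threshold: float = 0.5) -> np.ndarray:
    """Convert a predicted occupancy volume into an (N, 3) array of
    occupied voxel coordinates by thresholding."""
    return np.argwhere(volume >= threshold)

# Synthetic "prediction": a 32^3 grid where a centered ball is occupied
grid = np.indices((32, 32, 32)).astype(float)
dist = np.sqrt(((grid - 16.0) ** 2).sum(axis=0))
volume = (dist < 10.0).astype(float)  # occupancy probabilities 0.0 / 1.0

points = volume_to_points(volume)
print(points.shape)  # (N, 3): roughly 4/3*pi*10^3 ~ 4200 occupied voxels
```

Regressing a volume like this is what lets the network fill in parts of the face that were hidden in the input photo: occluded voxels are just more cells to predict.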

There is an online demo which will let you upload an image to convert and even save as a 3D model here

Link

7 years ago

Augmented reality climbing wall - Full video

7 years ago

Sony has opened pre-orders for its $840 SmartEyeglasses — complete with a bulky frame that makes Google Glass look positively unobtrusive.

7 years ago

Struggle no more – Virtually choose your nail polish color

In an effort to solve the trial-by-purchase problem, the nail gurus at Sally Hansen are introducing a new app which lets you virtually paint on nail polish.

With ManiMatch, there’s no need to upload a photo or take a picture. Launch the app, put your hand in front of the camera, and it starts scanning to determine your skin tone in order to provide color recommendations. Choose one, and the app paints the color right onto your nails. Then, voila! Your nails, on the screen.

7 years ago

UCLA unveils augmented reality teaching sandbox that lets you sculpt mountains, canyons and rivers, then fill them with water or even create erupting volcanoes.

7 years ago

I did the thing! - From the Star Wars app
