Kinetic sculpture by Jennifer Townley that iterates on a previous piece from 2015 yet remains visually complex and mesmerizing:
‘Asinas II’ is the successor to the original sculpture ‘Asinas’, sharing the same concept and overall appearance but with a different shape for the white “wing” parts.
The various angles and curves of the individual parts create an elaborate unity when joined together on the shaft. The two “wings” formed by these seventy-seven parts are able to slide through each other and rotate in opposite directions at slightly different speeds. This results in a movement that appears far more complex, consisting of multiple layers, where repetitive shapes seem to be moving within one another.
More Here
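To get an intuition for the motion described above, here is a minimal Python/matplotlib sketch (not based on Townley's actual design): two rings of "wing" points counter-rotating at slightly different angular speeds, which produces the layered, moiré-like movement the description mentions. All values are illustrative assumptions.

```python
# Toy visualization: two point "wings" counter-rotating at slightly
# different speeds, giving the layered motion described above.
import numpy as np
import matplotlib.pyplot as plt
from matplotlib.animation import FuncAnimation

n_parts = 77                         # number of parts, as in 'Asinas II'
angles = np.linspace(0, 2 * np.pi, n_parts, endpoint=False)

fig, ax = plt.subplots(subplot_kw={"aspect": "equal"})
ax.set_xlim(-1.2, 1.2); ax.set_ylim(-1.2, 1.2); ax.axis("off")
wing_a, = ax.plot([], [], "o", ms=3, color="0.2")
wing_b, = ax.plot([], [], "o", ms=3, color="0.6")

def update(frame):
    # Opposite rotation directions, slightly different angular velocities.
    t_a, t_b = 0.020 * frame, -0.023 * frame
    wing_a.set_data(np.cos(angles + t_a), np.sin(angles + t_a))
    wing_b.set_data(0.8 * np.cos(angles + t_b), 0.8 * np.sin(angles + t_b))
    return wing_a, wing_b

anim = FuncAnimation(fig, update, frames=600, interval=30, blit=True)
plt.show()
```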
#BlueOrigin test launch was a success! via @theofficialblueorigin (at Van Horn, Texas)
A*STAR and NTU researchers have created a thin-film material that lets them control the size and density of magnetic skyrmions, and they have also achieved electrical detection of these skyrmions. The fabrication process for the films is compatible with current industrial methods. This breakthrough is a key step towards the creation of a skyrmion-based memory device, one of the promising contenders for the next generation of memory technologies.
The discovery was recently published in Nature Materials.
Skyrmions are small particle-like magnetic structures about 400 times smaller than a red blood cell. They can be created in magnetic materials, and their stability at small sizes makes them ideal candidates for memory devices. Since the discovery of room temperature skyrmions in 2015, there has been a global race to create a skyrmion memory device because such a device could potentially hold more information, while using less power.
The need for more memory
Increasingly large amounts of data are created daily in our rapidly digitalised world. Moreover, cutting-edge technologies such as the Internet of Things (IoT), edge computing, and Artificial Intelligence (AI) require immediate processing of this data for effective performance. This demands the development of memory devices with ever higher capacities.
Read more.
Machine Learning investigation from samim examines body language in video using the recently released open-source library OpenPose:
From Gene Kelly’s Step-Dance to Bruce Lee’s Kung-Fu — iconic movement has made history. Communicating through Body Language is an ancient art form, currently evolving in fascinating ways: Computationally detecting human body language is becoming effective and accessible. This experiment explores enabling technologies, applications & implications.
For over 20 years, Motion Capture has enabled us to record the actions of humans and then use that information to animate a digital character or analyse poses. While movie makers and game developers embraced such technologies, until recently they required expensive equipment that captured only a few aspects of the overall performance.
Today, a new generation of machine learning based systems is making it possible to detect human body language directly from images. A growing number of research papers and open-source libraries address key aspects: body, hand, face and gaze tracking; identity, gender, age, emotion and muscle strain detection; action classification & prediction. We now can…
More Here
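As a concrete taste of what such libraries do, here is a minimal single-person keypoint-detection sketch in Python using OpenCV's DNN module with the publicly released OpenPose COCO model. The model file names are the standard release names; the image path, input size and confidence threshold are illustrative assumptions.

```python
# Minimal body-keypoint detection with OpenCV DNN + the OpenPose COCO model.
import cv2
import numpy as np

PROTO = "pose_deploy_linevec.prototxt"      # network definition (OpenPose release)
WEIGHTS = "pose_iter_440000.caffemodel"     # pretrained COCO weights
N_KEYPOINTS = 18                            # COCO body keypoints

net = cv2.dnn.readNetFromCaffe(PROTO, WEIGHTS)

frame = cv2.imread("dancer.jpg")            # any image containing a person
h, w = frame.shape[:2]

# The model expects a 368x368 input scaled to [0, 1].
blob = cv2.dnn.blobFromImage(frame, 1.0 / 255, (368, 368),
                             (0, 0, 0), swapRB=False, crop=False)
net.setInput(blob)
out = net.forward()                         # per-keypoint confidence heatmaps

points = []
for i in range(N_KEYPOINTS):
    heatmap = out[0, i, :, :]
    _, conf, _, peak = cv2.minMaxLoc(heatmap)
    # Map the heatmap peak back to image coordinates.
    x = int(w * peak[0] / out.shape[3])
    y = int(h * peak[1] / out.shape[2])
    points.append((x, y) if conf > 0.1 else None)

for p in points:
    if p is not None:
        cv2.circle(frame, p, 5, (0, 255, 0), -1)
cv2.imwrite("pose.jpg", frame)
```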
A solar eclipse occurs when the Moon temporarily blocks the light from the Sun. Within the narrow, 60- to 70-mile-wide band stretching from Oregon to South Carolina called the path of totality, the Moon completely blocked out the Sun’s face; elsewhere in North America, the Moon covered only part of the star, leaving a crescent-shaped Sun visible in the sky.
During this exciting event, we were collecting your images and reactions online.
This composite image, made from 4 frames, shows the International Space Station, with a crew of six onboard, as it transits the Sun at roughly five miles per second during a partial solar eclipse, from Northern Cascades National Park in Washington. Onboard as part of Expedition 52 are: NASA astronauts Peggy Whitson, Jack Fischer, and Randy Bresnik; Russian cosmonauts Fyodor Yurchikhin and Sergey Ryazanskiy; and ESA (European Space Agency) astronaut Paolo Nespoli.
Credit: NASA/Bill Ingalls
The Baily’s Beads effect is seen as the Moon makes its final move over the Sun during the total solar eclipse on Monday, August 21, 2017 above Madras, Oregon.
Credit: NASA/Aubrey Gemignani
This image from one of our Twitter followers shows the eclipse projected through tree leaves as crescent-shaped shadows, from Seattle, WA.
Credit: Logan Johnson
“The eclipse in the palm of my hand”. The eclipse is seen here through an indirect method, known as a pinhole projector, by one of our followers on social media from Arlington, TX.
Credit: Mark Schnyder
Through the lens on a pair of solar filter glasses, a social media follower captures the partial eclipse from Norridgewock, ME.
Credit: Mikayla Chase
While most of us watched the eclipse from Earth, six humans had the opportunity to view the event from 250 miles above on the International Space Station. European Space Agency (ESA) astronaut Paolo Nespoli captured this image of the Moon’s shadow crossing America.
Credit: Paolo Nespoli
This composite image shows the progression of a partial solar eclipse over Ross Lake, in Northern Cascades National Park, Washington. The beautiful series of the partially eclipsed Sun shows the full spectrum of the event.
Credit: NASA/Bill Ingalls
In this video captured at 1,500 frames per second with a high-speed camera, the International Space Station, with a crew of six onboard, is seen in silhouette as it transits the Sun at roughly five miles per second during a partial solar eclipse, Monday, Aug. 21, 2017 near Banner, Wyoming.
Credit: NASA/Joel Kowsky
To see more images from our NASA photographers, visit: https://www.flickr.com/photos/nasahqphoto/albums/72157685363271303
Make sure to follow us on Tumblr for your regular dose of space: http://nasa.tumblr.com
Another experiment from LuluXXX exploring Machine Learning visual outputs, combining black-and-white image colourization, SLIC superpixel image segmentation and style2paint manga colourization on footage of Aya-Bambi dancing:
Mixing SLIC superpixels and style2paint. Starts with version2, then version4, plus a little breakdown at the end and all 4 versions of the algorithm. Music: Spettro from NadjaLind mix - https://soundcloud.com/nadjalind/nadja-lind-pres-new-lucidflow-and-sofa-sessions-meowsic-mix Original footage: https://www.youtube.com/watch?v=N2YhzBjiYUg
Link
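The SLIC superpixel stage is easy to reproduce on a single frame with scikit-image. The sketch below is a minimal Python example; the colourization and style2paint passes are separate neural models and are not shown, and the file names are placeholders.

```python
# SLIC superpixel segmentation of one video frame with scikit-image.
from skimage import io
from skimage.segmentation import slic
from skimage.color import label2rgb

frame = io.imread("frame.png")              # one frame of the dance footage

# SLIC clusters pixels in combined colour + (x, y) space; `compactness`
# trades colour adherence against spatial regularity of the segments.
segments = slic(frame, n_segments=400, compactness=10, start_label=1)

# Replace each superpixel with its mean colour for the flat, mosaic look.
mosaic = label2rgb(segments, frame, kind="avg", bg_label=0)
io.imsave("frame_slic.png", (mosaic * 255).astype("uint8"))
```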
Machine Learning research from the University of Nottingham’s School of Computer Science generates a 3D model of a human face from a single image using neural networks:
3D face reconstruction is a fundamental Computer Vision problem of extraordinary difficulty. Current systems often assume the availability of multiple facial images (sometimes from the same subject) as input, and must address a number of methodological challenges such as establishing dense correspondences across large facial poses, expressions, and non-uniform illumination. In general these methods require complex and inefficient pipelines for model building and fitting. In this work, we propose to address many of these limitations by training a Convolutional Neural Network (CNN) on an appropriate dataset consisting of 2D images and 3D facial models or scans. Our CNN works with just a single 2D facial image, does not require accurate alignment nor establishes dense correspondence between images, works for arbitrary facial poses and expressions, and can be used to reconstruct the whole 3D facial geometry (including the non-visible parts of the face) bypassing the construction (during training) and fitting (during testing) of a 3D Morphable Model. We achieve this via a simple CNN architecture that performs direct regression of a volumetric representation of the 3D facial geometry from a single 2D image. We also demonstrate how the related task of facial landmark localization can be incorporated into the proposed framework and help improve reconstruction quality, especially for the cases of large poses and facial expressions.
There is an online demo here which lets you upload an image to convert and even save the result as a 3D model.
Link
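For intuition, here is a toy PyTorch sketch of the core idea: direct regression of a voxel occupancy volume from a single 2D image. This is not the authors' network (the paper uses a much deeper architecture and a far larger volume); every layer size and dimension below is an illustrative assumption.

```python
# Toy "volumetric regression" network: image in, voxel occupancy grid out.
import torch
import torch.nn as nn

class ToyVolumetricRegressor(nn.Module):
    def __init__(self, depth=32):
        super().__init__()
        self.encoder = nn.Sequential(          # 3x64x64 image -> feature maps
            nn.Conv2d(3, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(64, depth, 3, padding=1),  # one channel per voxel slice
        )
        # Treat each of the `depth` channels as one slice of the volume and
        # upsample the slices back to the image resolution.
        self.up = nn.Upsample(size=(64, 64), mode="bilinear",
                              align_corners=False)

    def forward(self, x):
        vol = self.up(self.encoder(x))         # (B, depth, 64, 64)
        return torch.sigmoid(vol)              # per-voxel occupancy in [0, 1]

model = ToyVolumetricRegressor()
img = torch.randn(1, 3, 64, 64)                # a dummy input image
volume = model(img)                            # (1, 32, 64, 64) voxel grid
print(volume.shape)

# Training would minimise per-voxel binary cross-entropy against volumes
# voxelised from ground-truth 3D scans (dummy targets here):
loss = nn.BCELoss()(volume, torch.rand_like(volume).round())
```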
Installation from Shawn Hunt and Microsoft Vancouver combines 3D printing, robotics, HoloLens mixed reality and indigenous symbolism:
The Raven, the ultimate trickster, has become a cyborg. In this Creative Collab, Shawn Hunt moves away from engaging with the handmade, exploring authenticity and our expectations of what it means to be indigenous through the removal of the hand-carved surface. The work, Transformation Mask, features Microsoft HoloLens, creating an experiential sculpture piece that engages with mixed reality.
In this work, the mask appropriates the traditional aspects of metamorphosis with the transformation from bird mask to human, yet in this adaptation the human mask has been altered, upgraded, and merged with the machine. Incorporating aspects of technology, sound and space, each part of the work reflects Hunt’s interest in how we understand and identify with the term indigenous.
This work presents a new trajectory for engagement and exploration of First Nations practice; one that points towards technology and innovation as aspects that expand traditional practices and open new avenues for interpretation.
More Here
The reason I like to take a longer stretch of vacation in one go is that after the first three or four days of contented loafing, waves of creative urge wash over me, and for the rest of my time off I can indulge my hobbies free of any obligations…
This evening, for example, I played around in MATLAB a bit, and the result is the little animation above, put together for readers receptive to this sort of thing.
As a few of you may already have guessed, the short clip illustrates the convergence of the Viola-Jones-style AdaBoost learning algorithm, built on single-level decision trees, as it tries to model normally distributed data sets.
(on the topic, see also: http://www.hpl.hp.com/techreports/Compaq-DEC/CRL-2001-1.pdf )
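For readers who want to poke at the same idea without MATLAB, here is a small Python analogue using scikit-learn: AdaBoost over one-level decision trees ("stumps") fit to two normally distributed classes, with staged predictions showing the convergence the animation illustrates. The data and parameters are illustrative assumptions.

```python
# AdaBoost with decision stumps on two Gaussian classes, tracking how
# training accuracy converges as stumps are added to the ensemble.
# (In scikit-learn >= 1.2 the keyword is `estimator`; older versions
# use `base_estimator`.)
import numpy as np
from sklearn.ensemble import AdaBoostClassifier
from sklearn.tree import DecisionTreeClassifier

rng = np.random.default_rng(0)
X = np.vstack([rng.normal(loc=-1.0, size=(200, 2)),
               rng.normal(loc=+1.0, size=(200, 2))])
y = np.array([0] * 200 + [1] * 200)

clf = AdaBoostClassifier(estimator=DecisionTreeClassifier(max_depth=1),
                         n_estimators=50)
clf.fit(X, y)

# staged_predict yields predictions after each successive stump.
for i, y_hat in enumerate(clf.staged_predict(X), start=1):
    if i % 10 == 0:
        print(f"{i:2d} stumps: train accuracy = {(y_hat == y).mean():.3f}")
```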
Coding project from Andrew Hart demonstrates how ARKit for iOS can be combined with CoreLocation to apply augmented reality to geolocation guidance:
ARKit + CoreLocation pic.twitter.com/nTdKyGrBmv
— Andrew Hart (@AndrewProjDent)
July 17, 2017
ARKit + CoreLocation, part 2 pic.twitter.com/AyQiFyzlj3
— Andrew Hart (@AndrewProjDent)
July 21, 2017
Andrew has said that the source code for this project will be up on GitHub soon (possibly later next week).