Radius Displays has created a shape-changing screen made from an array of motorized panels, currently being installed in Times Square:
[Instagram post shared by Radius (@radiusdisplays), 16 June 2017]
There is very little information available about the project, other than that it is currently installed and in testing, with an official launch coming soon. The best bet is to keep an eye on their Instagram here.
It will make its official debut on August 8th [Link]
Had an interesting chat with an AI today
“I cannot stretch my imagination so far, but I do firmly believe that it is practicable to disturb by means of powerful machines the electrostatic condition of the earth and thus transmit intelligible signals and perhaps power. In fact, what is there against the carrying out of such a scheme? We now know that electric vibration may be transmitted through a single conductor. Why then not try to avail ourselves of the earth for this purpose? We need not be frightened by the idea of distance. To the weary wanderer counting the mile-posts the earth may appear very large, but to that happiest of all men, the astronomer, who gazes at the heavens and by their standard judges the magnitude of our globe, it appears very small. And so I think it must seem to the electrician, for when he considers the speed with which an electric disturbance is propagated through the earth all his ideas of distance must completely vanish.”
Nikola Tesla, “On Light And Other High Frequency Phenomena.” Lecture delivered before the Franklin Institute, Philadelphia, February 1893, and before the National Electric Light Association, St. Louis, March 1893.
This house is being 3-D printed through a combination of human and robot construction. Mesh Mould technology uses the precision of robotic building to eliminate waste.
A programming project from Or Fleisher and Anastasis Germanidis combines augmented reality and machine learning, using a neural net trained for age prediction from a mobile device camera:
‘Death-Mask’ predicts how long people have to live and overlays that in the form of a “clock” above their heads in augmented reality. The project uses a machine learning model titled AgeNet for the prediction process. Once an age is predicted, it uses the average life expectancy in that location to estimate how long one has left.
The aesthetic inspiration derives from the concept of death masks. These are sculptures meant to symbolize the death of a person by casting their face into a sculpture (i.e., a mask).
The experiment uses ARKit to render the visual content in augmented reality on an iPad and CoreML to run the machine learning model in real-time. The project is by no means an accurate representation of one’s life expectancy and is more oriented towards the examination of public information in augmented reality in the age of deep learning.
Link
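The post does not spell out the exact countdown formula, but a minimal sketch of the idea (subtracting the predicted age from an assumed local life expectancy) might look like this in Python; the function name and the numbers are hypothetical, not taken from the project:

```python
from datetime import timedelta

def remaining_life_estimate(predicted_age_years, life_expectancy_years):
    # Naive "clock" value: remaining time = life expectancy minus predicted age,
    # floored at zero so the overlay never shows a negative countdown.
    remaining_years = max(life_expectancy_years - predicted_age_years, 0.0)
    return timedelta(days=remaining_years * 365.25)

# Example with made-up numbers: predicted age 30, assumed local life expectancy 81.
print(remaining_life_estimate(30, 81))  # -> 18627 days, 18:00:00
```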
Graphics research from Daniel Sýkora et al. at DCGI, Czech Republic, presents a method of real-time style transfer focused on human faces, similar to their previous StyLit work.
Note that the video below is a silent demonstration of results; the official paper has not yet been made public:
Results video for the paper: Fišer et al.: Example-Based Synthesis of Stylized Facial Animations, to appear in ACM Transactions on Graphics 36(4):155, SIGGRAPH 2017.
[EDIT: 20 July 2017]
An official video (no audio) and a project page have been made public:
We introduce a novel approach to example-based stylization of portrait videos that preserves both the subject’s identity and the visual richness of the input style exemplar. Unlike the current state-of-the-art based on neural style transfer [Selim et al. 2016], our method performs non-parametric texture synthesis that retains more of the local textural details of the artistic exemplar and does not suffer from image warping artifacts caused by aligning the style exemplar with the target face. Our method allows the creation of videos with less than full temporal coherence [Ruder et al. 2016]. By introducing a controllable amount of temporal dynamics, it more closely approximates the appearance of real hand-painted animation in which every frame was created independently. We demonstrate the practical utility of the proposed solution on a variety of style exemplars and target videos.
Link
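Since the paper itself is not reproduced here, the following is only a toy illustration of the general idea behind non-parametric, guidance-driven synthesis (for each patch of a target guidance map, copy pixels from the style exemplar patch whose guidance looks most similar), not the authors' algorithm; the real method uses more elaborate facial guidance channels and optimization, and every name and parameter below is an assumption:

```python
import numpy as np

def guided_patch_synthesis(style_img, style_guide, target_guide, patch=7, step=7):
    # Brute-force, guidance-driven patch copy: for each patch of the target
    # guidance map, find the style patch whose guidance is closest (L2 distance)
    # and paste the corresponding pixels from the style exemplar.
    # style_guide and target_guide must have the same number of channels.
    sh, sw = style_guide.shape[:2]
    th, tw = target_guide.shape[:2]
    out = np.zeros((th, tw, style_img.shape[2]), dtype=style_img.dtype)

    # Pre-extract candidate style patches and their guidance descriptors.
    coords, descs = [], []
    for y in range(0, sh - patch + 1, 2):
        for x in range(0, sw - patch + 1, 2):
            coords.append((y, x))
            descs.append(style_guide[y:y + patch, x:x + patch].astype(np.float32).ravel())
    descs = np.stack(descs)

    for y in range(0, th - patch + 1, step):
        for x in range(0, tw - patch + 1, step):
            q = target_guide[y:y + patch, x:x + patch].astype(np.float32).ravel()
            best = np.argmin(((descs - q) ** 2).sum(axis=1))
            sy, sx = coords[best]
            out[y:y + patch, x:x + patch] = style_img[sy:sy + patch, sx:sx + patch]
    return out
```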
UCLA unveils an augmented reality teaching sandbox that lets you sculpt mountains, canyons and rivers, then fill them with water or even create erupting volcanoes.
Another machine learning experiment from Samim applies a regression method to moving images, breaking each frame down into visual compartments to create a polygon / Modernist style:
Regression is a widely applied technique in machine learning … Regression analysis is a statistical process for estimating the relationships among variables. Let’s have some fun with it ;-)
… This experiment tests a regression-based approach for video stylisation. The following video was generated using Stylize by Alec Radford. Alec extends Andrej’s implementation and uses a fast Random Forest Regressor. The source video is a short by JacksGap.
You can find out more about the machine learning experiment here
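As a rough sketch of what per-frame regression stylisation can look like (not Alec Radford's actual Stylize code), one can fit a random forest that maps normalised pixel coordinates to colour and then predict the whole frame back; the shallow, axis-aligned tree splits are what produce the flat, blocky colour regions. The file names and parameters below are placeholders:

```python
import numpy as np
from PIL import Image
from sklearn.ensemble import RandomForestRegressor

def stylize_frame(frame, n_estimators=10, max_depth=8, sample=5000):
    # Fit (x, y) -> (r, g, b) on a random subset of pixels, then predict the
    # whole frame. Shallow trees give the flat, compartmentalised look.
    h, w, _ = frame.shape
    ys, xs = np.mgrid[0:h, 0:w]
    coords = np.column_stack([xs.ravel() / w, ys.ravel() / h])
    colors = frame.reshape(-1, 3).astype(np.float32)
    idx = np.random.choice(len(coords), size=min(sample, len(coords)), replace=False)
    model = RandomForestRegressor(n_estimators=n_estimators, max_depth=max_depth)
    model.fit(coords[idx], colors[idx])
    out = model.predict(coords).reshape(h, w, 3)
    return np.clip(out, 0, 255).astype(np.uint8)

# Apply to a single exported video frame (placeholder file names).
frame = np.array(Image.open("frame.png").convert("RGB"))
Image.fromarray(stylize_frame(frame)).save("stylized.png")
```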
Intel Core with Radeon RX Vega M Graphics Launched: HP, Dell, and Intel NUC http://ift.tt/2CQpCuH
It consists of a compressed-air charge that is released to press the front wheel against the ground when a loss of grip is detected at the front end.
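As a purely illustrative sketch (the actual system's sensors, thresholds, and actuation interface are not described in the post), the trigger logic might be as simple as a one-shot check in a control loop:

```python
def update_front_wheel_assist(front_grip, charge_available, release_valve):
    # Hypothetical one-shot trigger: when measured front-end grip drops below a
    # threshold, release the compressed-air charge to push the front wheel down.
    GRIP_THRESHOLD = 0.3  # placeholder value, not from the source
    if charge_available and front_grip < GRIP_THRESHOLD:
        release_valve()
        return False  # the charge has been spent
    return charge_available
```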
An online project from Qosmo generates ambient sounds for Google Street View panoramas through deep learning, interpreting the visuals to select appropriate sounds:
“Imaginary Soundscape” is a web-based sound installation, in which viewers can freely walk around Google Street View and immerse themselves in an imaginary soundscape generated with deep learning models.
… Once trained, the rest was straightforward. For a given image from Google Street View, we can find the best-matched sound file from a pre-collected sound dataset, such that the output of SoundNet with the sound input is the most similar to the output of the CNN model for the image. For the sound dataset, we collected 15,000 sound files from the internet published under Creative Commons licenses and filtered them with another CNN model on spectrograms, trained to distinguish environmental/ambient sound from other types of audio (music, speech, etc.).
You can try it out for yourself here, and find more background information here
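Under the hood this amounts to a nearest-neighbour search in a shared embedding space. A minimal sketch of that matching step, assuming the image and sound embeddings have already been computed by the respective image CNN and SoundNet models (which are not included here) and using cosine similarity as the distance, could look like:

```python
import numpy as np

def best_matching_sound(image_embedding, sound_embeddings, sound_files):
    # Return the sound file whose precomputed embedding is most similar to the
    # image embedding, using cosine similarity over L2-normalised vectors.
    img = image_embedding / np.linalg.norm(image_embedding)
    snd = sound_embeddings / np.linalg.norm(sound_embeddings, axis=1, keepdims=True)
    scores = snd @ img
    return sound_files[int(np.argmax(scores))]

# Hypothetical usage: random vectors stand in for the real CNN/SoundNet outputs.
img_emb = np.random.rand(512)
snd_embs = np.random.rand(15000, 512)
files = [f"sound_{i}.wav" for i in range(15000)]
print(best_matching_sound(img_emb, snd_embs, files))
```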