Installation by David Bowen reproduces real-time wind data with a collection of mechanized stalks:
This installation consists of a series of 126 x/y tilting mechanical devices connected to thin dried plant stalks installed in a gallery, and a dried plant stalk connected to an accelerometer installed outdoors. When the wind blows, it causes the stalk outside to sway. The accelerometer detects this movement, transmitting the motion to the grouping of devices in the gallery. The stalks in the gallery space therefore move in real time and in unison, based on the movement of the wind outside.
From May to September 2018, a newly expanded version of tele-present wind was installed at Azkuna Zentroa, Bilbao, while the sensor was installed in an outdoor location adjacent to the Visualization and Digital Imaging Lab at the University of Minnesota. The individual components of the installation in Spain thus moved in unison as they mimicked the direction and intensity of the wind halfway around the world. As it monitored and collected real-time data from this remote location, the system relayed a physical representation of the dynamic and fluid environmental conditions.
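Bowen hasn't published the hardware or network details, but the relay idea can be sketched: sample x/y tilt from the single outdoor accelerometer and fan the readings out to every tilting device so the whole field moves as one. Everything below (device addresses, update rate, the read_tilt stub) is an assumption for illustration:

```python
# Speculative sketch of the relay idea -- the actual hardware and
# protocol of tele-present wind are not published. One process samples
# x/y tilt from the outdoor accelerometer and broadcasts it to every
# tilting device so all stalks mirror the single outdoor stalk.
import json
import socket
import time

# 126 gallery devices; these addresses are assumed for illustration.
DEVICES = [("10.0.0.{}".format(i), 9000) for i in range(2, 128)]

def read_tilt():
    """Placeholder for the outdoor accelerometer: returns (x, y) tilt in degrees."""
    return 0.0, 0.0  # stub value; a real sensor read goes here

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
while True:
    x, y = read_tilt()
    packet = json.dumps({"x": x, "y": y}).encode()
    for addr in DEVICES:
        sock.sendto(packet, addr)  # every device receives the same motion
    time.sleep(0.05)               # ~20 Hz keeps the movement fluid
```

Broadcasting the same packet to every device is what produces the in-unison movement; each device only has to map the two tilt angles onto its two servo axes.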
More Here
Related: Another project by David from 2012 did something similar with ‘Tele-Present Water’ [Link]
Virtual Genetic Code
Video from deepython demonstrates an object detection neural network framework applied to footage taken in New York:
This is a state-of-the-art object detection framework called Faster R-CNN, described here: https://arxiv.org/abs/1506.01497, using TensorFlow.
I took the following video and fed it through the TensorFlow Faster R-CNN model; this isn't running on an embedded device yet.
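deepython's exact setup isn't shared, but running a pre-trained Faster R-CNN over video frames is straightforward with an off-the-shelf checkpoint. A minimal sketch, assuming the TensorFlow Hub ResNet-50 model and a local file name (both assumptions, not his pipeline):

```python
# Sketch: pre-trained Faster R-CNN over video frames via TensorFlow Hub.
import cv2
import numpy as np
import tensorflow as tf
import tensorflow_hub as hub

detector = hub.load("https://tfhub.dev/tensorflow/faster_rcnn/resnet50_v1_640x640/1")

cap = cv2.VideoCapture("nyc_footage.mp4")  # hypothetical input file
while True:
    ok, frame = cap.read()
    if not ok:
        break
    rgb = cv2.cvtColor(frame, cv2.COLOR_BGR2RGB)
    # The model expects a uint8 batch of shape [1, H, W, 3].
    result = detector(tf.convert_to_tensor(rgb[np.newaxis, ...]))
    boxes = result["detection_boxes"][0].numpy()   # normalized [ymin, xmin, ymax, xmax]
    scores = result["detection_scores"][0].numpy()
    h, w = frame.shape[:2]
    for box, score in zip(boxes, scores):
        if score < 0.5:  # arbitrary confidence threshold
            continue
        y0, x0, y1, x1 = (box * np.array([h, w, h, w])).astype(int)
        cv2.rectangle(frame, (int(x0), int(y0)), (int(x1), int(y1)), (0, 255, 0), 2)
    cv2.imshow("detections", frame)
    if cv2.waitKey(1) == 27:  # Esc to quit
        break
cap.release()
```

The 0.5 score threshold is an arbitrary choice; lowering it shows more (and noisier) detections.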
Link
Graphics research from LIRIS, Purdue University and Ubisoft presents a method of generating 3D landscape terrain from simple pen markings with the assistance of neural networks:
Authoring virtual terrains presents a challenge, and there is a strong need for authoring tools able to create realistic terrains with simple user inputs and a high level of user control. We propose an example-based authoring pipeline that uses a set of terrain synthesizers dedicated to specific tasks. Each terrain synthesizer is a Conditional Generative Adversarial Network trained on real-world terrains and their sketched counterparts. The training sets are built automatically so that the terrain synthesizers learn to generate terrain from features that are easy to sketch. During the authoring process, the artist first creates a rough sketch of the main terrain features, such as rivers, valleys and ridges, and the algorithm automatically synthesizes a terrain corresponding to the sketch using the learned features of the training samples. Moreover, an erosion synthesizer can also generate terrain evolution by erosion at a very low computational cost. Our framework allows for easy terrain authoring and provides a high level of realism for a minimal sketching cost. We show various examples of terrain synthesis created by experienced as well as inexperienced users, who are able to design a vast variety of complex terrains in a very short time.
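The paper's networks aren't reproduced here, but the core ingredient is a pix2pix-style conditional GAN mapping a sketch image to a heightmap. The toy architecture below is an assumption for illustration, far smaller than anything production-ready; in training, the discriminator would judge (sketch, heightmap) pairs and the generator would combine an adversarial loss with an L1 term, as in pix2pix:

```python
# Toy pix2pix-style conditional GAN for sketch-to-heightmap synthesis.
# Layer sizes are illustrative assumptions, not the paper's networks.
import tensorflow as tf
from tensorflow.keras import layers

def make_generator():
    # Encoder-decoder mapping a 1-channel feature sketch to a 1-channel heightmap.
    sketch = layers.Input(shape=(256, 256, 1))
    x = layers.Conv2D(64, 4, strides=2, padding="same", activation="relu")(sketch)
    x = layers.Conv2D(128, 4, strides=2, padding="same", activation="relu")(x)
    x = layers.Conv2DTranspose(64, 4, strides=2, padding="same", activation="relu")(x)
    height = layers.Conv2DTranspose(1, 4, strides=2, padding="same", activation="tanh")(x)
    return tf.keras.Model(sketch, height)

def make_discriminator():
    # PatchGAN-style critic conditioned on the sketch: sees (sketch, heightmap) pairs.
    sketch = layers.Input(shape=(256, 256, 1))
    height = layers.Input(shape=(256, 256, 1))
    x = layers.Concatenate()([sketch, height])
    x = layers.Conv2D(64, 4, strides=2, padding="same", activation="relu")(x)
    x = layers.Conv2D(128, 4, strides=2, padding="same", activation="relu")(x)
    logits = layers.Conv2D(1, 4, padding="same")(x)  # per-patch real/fake scores
    return tf.keras.Model([sketch, height], logits)

generator, discriminator = make_generator(), make_discriminator()
```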
Link
Portal in AR looks amaaaazing!
Unfortunately this is just a demo on HoloLens by developer KennyW, but here’s hoping it comes to life one day.
Latest project from @mario-klingemann employs neural networks trained on a collection of archive footage to recreate videos from that dataset.
It is confirmed that no human intervention occurred in the processed output, and it is interesting to see where there are convincing connections between the two (and where there apparently are none):
Destiny Pictures, Alternative Neural Edit, Side by Side Version
This movie has been automatically collaged by a neural algorithm, using the movie that Donald Trump gave as a present to Kim Jong Un as the template and replacing all scenes with visually similar scenes from public domain movies found in the Internet Archive.
Neural Remake of “Take On Me” by A-Ha
An AI automatically detects the scenes in the source video clip and then replaces them with similar-looking archival footage. The process is fully automatic; there are no manual edits.
Neural Reinterpretation of “Sabotage” by the Beastie Boys
An AI automatically detects the scenes in the source video clip and then replaces them with similar-looking archival footage.
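Mario hasn't released his code, but the descriptions suggest a two-step pipeline: detect scene cuts in the source clip, then retrieve the most visually similar archival clip for each scene. A minimal sketch, assuming PySceneDetect for the cuts, MobileNetV2 embeddings for similarity, and hypothetical file names and a precomputed archive_vecs list:

```python
# Hedged sketch of the general idea: scene detection plus nearest-neighbour
# retrieval by CNN embedding. Library choices and file names are assumptions.
import numpy as np
import cv2
import tensorflow as tf
from scenedetect import detect, ContentDetector

embedder = tf.keras.applications.MobileNetV2(include_top=False, pooling="avg")

def embed_frame(video_path, frame_no):
    cap = cv2.VideoCapture(video_path)
    cap.set(cv2.CAP_PROP_POS_FRAMES, frame_no)
    ok, frame = cap.read()
    cap.release()
    rgb = cv2.cvtColor(frame, cv2.COLOR_BGR2RGB)
    x = cv2.resize(rgb, (224, 224)).astype(np.float32)
    x = tf.keras.applications.mobilenet_v2.preprocess_input(x)
    return embedder(x[np.newaxis])[0].numpy()

# One embedding per detected scene in the template movie (assumed file name).
scenes = detect("destiny_pictures.mp4", ContentDetector())
source_vecs = [embed_frame("destiny_pictures.mp4", start.get_frames())
               for start, end in scenes]

def best_match(vec, archive_vecs):
    # archive_vecs: precomputed embeddings of candidate archival clips (assumed).
    sims = [np.dot(vec, a) / (np.linalg.norm(vec) * np.linalg.norm(a) + 1e-8)
            for a in archive_vecs]
    return int(np.argmax(sims))  # index of the most visually similar clip
```

The final step would simply concatenate the matched archival clips, trimmed to each scene's duration.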
There are other video examples on Mario's YouTube channel (but some may not be viewable due to music copyright).
If you follow Mario's Twitter timeline, you can keep up with the latest examples and follow the evolution of the project [link]
slide the light off you, you may find some peace
Research from the Columbia Computer Graphics Group can embed hidden information in text documents by minutely altering font characteristics using neural networks:
We introduce FontCode, an information embedding technique for text documents. Provided a text document with specific fonts, our method embeds user-specified information in the text by perturbing the glyphs of text characters while preserving the text content. We devise an algorithm to choose unobtrusive yet machine-recognizable glyph perturbations, leveraging a recently developed generative model that alters the glyphs of each character continuously on a font manifold. We then introduce an algorithm that embeds a user-provided message in the text document and produces an encoded document whose appearance is minimally perturbed from the original document. We also present a glyph recognition method that recovers the embedded information from an encoded document stored as a vector graphic or pixel image, or even on a printed paper. In addition, we introduce a new error-correction coding scheme that rectifies a certain number of recognition errors. Lastly, we demonstrate that our technique enables a wide array of applications, using it as a text document metadata holder, an unobtrusive optical barcode, a cryptographic message embedding scheme, and a text document signature.
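The glyph manifold and recognition networks are the hard part of FontCode; the information-embedding layer on top can be sketched in a few lines. Assume, purely for illustration, that each letter or digit carries one of k distinguishable glyph variants, so a message can be spread across characters as base-k digits:

```python
# Conceptual sketch only -- FontCode itself perturbs glyphs on a learned
# font manifold and adds error-correction coding. Here each embeddable
# character is assumed to carry one of k glyph variants.
def embed_message(text, message_bits, k=4):
    value = int(message_bits, 2)           # message as one big integer
    assignment = []
    for ch in text:
        if ch.isalnum() and value > 0:     # letters/digits carry data (assumed)
            assignment.append((ch, value % k))
            value //= k
        else:
            assignment.append((ch, None))  # rendered with the normal glyph
    if value > 0:
        raise ValueError("text too short to hold the message")
    return assignment  # each (char, variant) pair picks a perturbed glyph

def recover_message(assignment, k=4):
    value, place = 0, 1
    for _, variant in assignment:
        if variant is not None:
            value += variant * place       # undo the base-k expansion
            place *= k
    return bin(value)[2:]

coded = embed_message("Hello world", "1011010010")
assert recover_message(coded) == "1011010010"
```

The real system replaces this toy digit scheme with continuous manifold perturbations plus an error-correcting code, which is what makes it survive printing and photography.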
More Here
the clicking sound of the rack is oddly satisfying.
https://instagram.com/p/BQIDI5eh5_m/
I can't argue with that
This house is being 3D printed through combined human and robot construction. Mesh Mould technology uses the precision of robotic building to eliminate waste.
follow @the-future-now