A release from IBM Research: a computer vision dataset for a real-time gesture recognition system, notable for its minimal representation visualizations:
This dataset was used to build the real-time gesture recognition system described in the CVPR 2017 paper titled “A Low Power, Fully Event-Based Gesture Recognition System.” The data was recorded using a DVS128. The dataset contains 11 hand gestures from 29 subjects under 3 illumination conditions and is released under a Creative Commons Attribution 4.0 license.
More Here
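For a sense of what the DVS128 actually records: rather than frames, the sensor emits a sparse stream of events, each one a pixel location, a timestamp and a polarity, and the gesture recognizer consumes that stream directly. Below is a minimal Python sketch of how such events can be binned into a signed frame for visualization; the event tuple layout and the synthetic data are illustrative assumptions, not the dataset's actual file format.

```python
import numpy as np

# Illustrative sketch: a DVS128 emits asynchronous events instead of
# frames. Each event here is (x, y, timestamp_us, polarity), with x, y
# in 0..127 and polarity +1 (brightness increase) or -1 (decrease).
# This tuple layout is an assumption for illustration, not the
# dataset's actual file format.

SENSOR_SIZE = 128  # the DVS128 has a 128 x 128 pixel array

def accumulate_events(events, t_start, t_end):
    """Bin all events with t_start <= t < t_end into one signed frame."""
    frame = np.zeros((SENSOR_SIZE, SENSOR_SIZE), dtype=np.int32)
    for x, y, t, polarity in events:
        if t_start <= t < t_end:
            frame[y, x] += polarity
    return frame

# Usage: accumulate 10 ms of synthetic events into a single frame.
rng = np.random.default_rng(seed=0)
events = [
    (int(rng.integers(0, SENSOR_SIZE)), int(rng.integers(0, SENSOR_SIZE)),
     int(rng.integers(0, 10_000)), int(rng.choice([-1, 1])))
    for _ in range(1_000)
]
frame = accumulate_events(events, t_start=0, t_end=10_000)
print(frame.shape, frame.sum())
```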
EDIT - Here is a brief video explanation:
An update on the project from kidach1: a game featuring enemies with optical camouflage that you can uncover with filters (multiplayer is also possible):
You can keep track of progress on Twitter or Patreon
A programming project from Or Fleisher and Anastasis Germanidis combines Augmented Reality and Machine Learning, using a Neural Net trained for age prediction through a mobile device’s camera:
‘Death-Mask’ predicts how long people have to live and overlays that in the form of a “clock” above their heads in augmented reality. The project uses a machine learning model titled AgeNet for the prediction process. Once an age is predicted, it uses the average life expectancy in that location to estimate how long one has left.
The aesthetic inspiration derives from the concept of death masks. These are sculptures meant to symbolize the death of a person by casting their face into a sculpture (i.e. a mask).
The experiment uses ARKit to render the visual content in augmented reality on an iPad and CoreML to run the machine learning model in real-time. The project is by no means an accurate representation of one’s life expectancy and is more oriented towards the examination of public information in augmented reality in the age of deep learning.
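The project’s code isn’t included here, but the arithmetic it describes, subtracting a predicted age from the local average life expectancy, is easy to sketch. Note that AgeNet-style classifiers (such as Levi and Hassner’s age model, which “AgeNet” likely refers to) output an age bracket rather than an exact age; the brackets, the midpoint heuristic and the life-expectancy figure below are illustrative assumptions, not the project’s actual implementation.

```python
# Sketch of the "clock" arithmetic Death-Mask describes: subtract a
# predicted age from the local average life expectancy. The brackets
# follow the Levi & Hassner age-classification model that "AgeNet"
# likely refers to; the midpoint heuristic and the example
# life-expectancy value are assumptions, not the project's code.

AGE_BRACKETS = [
    (0, 2), (4, 6), (8, 13), (15, 20),
    (25, 32), (38, 43), (48, 53), (60, 100),
]

def estimate_years_left(bracket_index: int, life_expectancy: float) -> float:
    """Estimate remaining years from a predicted age bracket."""
    low, high = AGE_BRACKETS[bracket_index]
    predicted_age = (low + high) / 2  # midpoint of the bracket
    return max(0.0, life_expectancy - predicted_age)

# Usage: the classifier predicts bracket 4 (ages 25-32) and the local
# average life expectancy is 81 years.
print(f"{estimate_years_left(4, 81.0):.1f} years left")  # -> 52.5 years left
```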
Link
Silicon Valley entrepreneur and novelist Rob Reid takes on artificial intelligence — and how it might end the world — in his weird, funny techno-philosophical thriller, After On.
Critic Jason Sheehan says, “It’s like an extended philosophy seminar run by a dozen insane Cold War heads-of-station, three millennial COOs and that guy you went to college with who always had the best weed but never did his laundry.”
‘After On’ Sees The End Of The World In A Dating App
Continuing from my previous post, a little FYI …
You can download your model and upload it to @sketchfab
The example above was created using this current Tumblr Radar image from @made
Hackaday Prize Best Product Finalist: Reconfigurable Robots http://ift.tt/2uB4Acd
Developer 应高选 has been experimenting with 4DViews’ free 4D captures and shares the results - particularly striking is this one using the new Apple ARKit and Unity software:
4DAR with ARKit and Unity3D, real man and real scale. iPhone6s test.
Here is an example using the same assets at a smaller scale:
应高选’s YouTube channel can be found here
4DViews on PK (from last week) Here
[Update 10/07/17]
Here is a video from 4DAR demonstrating how to put together your own in 5 minutes:
An almost real-time visual tutorial on using a 4DViews volumetric capture sequence with Unity and Apple ARKit, for fast hologram display
Artificial intelligence software calculates the best pose for selling the product and demands it from the model.
Looker (1981)
Machine gun position on the German R-class Zeppelin ‘LZ 63’, 1916-17
via reddit
Installation by teamVOID uses industrial robots to perform life drawings alongside human artists:
‘Way to Artist’ has the purpose of rethinking the process of artistic creation through a comparison of robot and human actions. Drawing is assumed to be a creative activity that only humans are capable of. Nowadays, however, the emergence of artificial intelligence has some believing that artwork could be created by robots. In connection with this, the work involves drawings executed by a robot and a human, each with different drawing skills. In the process, it reconsiders the general meaning of the drawing activity.
Whilst this isn’t the first example of this type of setup, it isn’t clear whether the robots have any visual interpretation model, so this could be a metaphorical rather than technical presentation.
Link
Yep. That was quick. In a certain way.