Silicon Valley entrepreneur and novelist Rob Reid takes on artificial intelligence — and how it might end the world — in his weird, funny techno-philosophical thriller, After On.
Critic Jason Sheehan says, “It’s like an extended philosophy seminar run by a dozen insane Cold War heads-of-station, three millennial COOs and that guy you went to college with who always had the best weed but never did his laundry.”
‘After On’ Sees The End Of The World In A Dating App
Cyberpunk Street by Yoshimitszu
Say what you want about the animation, but they did add a lot of little cute details in the romances.
The reason I like to take longer stretches of vacation at once is that after the first 3-4 days of contented loafing around, waves of creative urge come over me, and for the rest of my time off I can indulge my hobbies free of any obligations…
This evening, for example, I spent a little time in Matlab, and the result is the small animation above, put together for readers with a taste for this sort of thing.
As some of you may have guessed, the clip illustrates the convergence of the Viola-Jones-style AdaBoost learning algorithm, built on single-level decision trees (decision stumps), as it tries to model normally distributed data sets.
(for more on the topic, see: http://www.hpl.hp.com/techreports/Compaq-DEC/CRL-2001-1.pdf )
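For readers who want to tinker with the same idea, here is a minimal sketch of AdaBoost over decision stumps on two Gaussian classes. It is written in Java rather than Matlab, and the data generation, class names and parameters are illustrative assumptions, not the code behind the animation:

import java.util.Arrays;
import java.util.Random;

// Minimal AdaBoost-with-decision-stumps sketch on 1-D data (illustrative only).
public class AdaBoostStumps {

    // A decision stump: predicts +1 if (x > threshold), possibly flipped by polarity.
    static class Stump {
        double threshold;
        int polarity;   // +1 or -1
        double alpha;   // weight of this stump in the final ensemble

        int predict(double x) {
            return (x > threshold ? 1 : -1) * polarity;
        }
    }

    public static void main(String[] args) {
        Random rnd = new Random(42);
        int n = 200;
        double[] x = new double[n];
        int[] y = new int[n];
        // Two normally distributed classes, roughly as in the animation.
        for (int i = 0; i < n; i++) {
            y[i] = (i % 2 == 0) ? -1 : 1;
            x[i] = rnd.nextGaussian() + y[i];
        }

        double[] w = new double[n];   // per-example weights
        Arrays.fill(w, 1.0 / n);
        int rounds = 20;
        Stump[] ensemble = new Stump[rounds];

        for (int t = 0; t < rounds; t++) {
            // Pick the stump with the lowest weighted error (thresholds tried at the data points).
            Stump best = null;
            double bestErr = Double.MAX_VALUE;
            for (int i = 0; i < n; i++) {
                for (int pol = -1; pol <= 1; pol += 2) {
                    Stump s = new Stump();
                    s.threshold = x[i];
                    s.polarity = pol;
                    double err = 0.0;
                    for (int j = 0; j < n; j++)
                        if (s.predict(x[j]) != y[j]) err += w[j];
                    if (err < bestErr) { bestErr = err; best = s; }
                }
            }

            // Stump weight from its weighted error, then re-weight the examples:
            // misclassified points gain weight and dominate the next round.
            bestErr = Math.max(bestErr, 1e-10);
            best.alpha = 0.5 * Math.log((1.0 - bestErr) / bestErr);
            ensemble[t] = best;
            double sum = 0.0;
            for (int j = 0; j < n; j++) {
                w[j] *= Math.exp(-best.alpha * y[j] * best.predict(x[j]));
                sum += w[j];
            }
            for (int j = 0; j < n; j++) w[j] /= sum;

            // Training error of the weighted-vote ensemble built so far.
            int mistakes = 0;
            for (int j = 0; j < n; j++) {
                double score = 0.0;
                for (int k = 0; k <= t; k++) score += ensemble[k].alpha * ensemble[k].predict(x[j]);
                if ((score >= 0 ? 1 : -1) != y[j]) mistakes++;
            }
            System.out.printf("round %2d: training error %.3f%n", t + 1, (double) mistakes / n);
        }
    }
}

Run it and you should see the training error of the combined classifier shrink over the rounds, which is exactly the convergence the animation visualizes.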
https://github.com/yahoo/samoa
Machine learning and data mining are well-established techniques in the world of IT, especially among web companies and startups. Spam detection, personalization, and recommendations are just a few of the applications made possible by mining the huge quantities of data available nowadays. However, “big data” is not only about Volume, but also about Velocity (and Variety: the three Vs of big data).
The usual pipeline for modeling data (what “data scientists” do) involves taking a sample from production data, cleaning and preprocessing it to make it usable, training a model for the task at hand and finally deploying it to production. The final output of this process is a pipeline that needs to run periodically (and be maintained) in order to keep the model up to date. Hadoop and its ecosystem (e.g., Mahout) have proven to be an extremely successful platform to support this process at web scale.
However, no solution is perfect, and big data is “data whose characteristics force us to look beyond the traditional methods that are prevalent at the time”. The current challenge is to move towards analyzing data as soon as it arrives in the system, in near real-time.
For example, models for mail spam detection get outdated with time and need to be retrained with new data. New data (i.e., spam reports) arrives continuously, and the model starts getting stale the moment it is deployed: all the new data sits idle, creating no value, until the next model update. Incorporating new data as soon as it arrives is what the “Velocity” in big data is about. In this case, Hadoop is not the ideal tool to cope with streams of fast-changing data.
Distributed stream processing engines are emerging as the platform of choice to handle this use case. Examples of these platforms are Storm, S4, and, more recently, Samza. These platforms join the scalability of distributed processing with the fast response of stream processing. Yahoo has already adopted Storm as a key technology for low-latency big data processing.
Alas, there is currently no common solution for mining big data streams, that is, for doing machine learning on streams in a distributed environment.
SAMOA (Scalable Advanced Massive Online Analysis) is a framework for mining big data streams. Like most of the big data ecosystem, it is written in Java. It features a pluggable architecture that allows it to run on several distributed stream processing engines such as Storm and S4. SAMOA includes distributed algorithms for the most common machine learning tasks, such as classification and clustering. As a simple analogy, you can think of SAMOA as Mahout for streaming.
SAMOA is both a platform and a library. As a platform, it lets algorithm developers abstract away from the underlying execution engine and therefore reuse their code to run on different engines. It also makes it easy to write plug-in modules to port SAMOA to new execution engines.
As a library, SAMOA contains state-of-the-art implementations of algorithms for distributed machine learning on streams. The first alpha release supports classification and clustering.
For classification, we implemented a Vertical Hoeffding Tree (VHT), a distributed streaming version of decision trees tailored for sparse data (e.g., text). For clustering, we included a distributed algorithm based on CluStream. The library also includes meta-algorithms such as bagging.
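Some background on how the VHT decides when to grow the tree: like the classic Hoeffding tree it builds on, it splits a leaf only once the observed gap between the two best split attributes exceeds the Hoeffding bound ε = sqrt(R² ln(1/δ) / (2n)). The snippet below is a self-contained illustration of that test, not SAMOA's actual API; the class name and the numbers are assumptions for the example.

// Illustration of the Hoeffding-bound split test behind Hoeffding trees (assumed names and values).
public class HoeffdingBoundDemo {

    // With probability 1 - delta, the true mean of a variable with range R
    // lies within epsilon of the mean observed over n independent samples.
    static double hoeffdingBound(double range, double delta, long n) {
        return Math.sqrt(range * range * Math.log(1.0 / delta) / (2.0 * n));
    }

    public static void main(String[] args) {
        double bestGain = 0.30;        // information gain of the best split attribute (assumed)
        double secondBestGain = 0.25;  // information gain of the runner-up (assumed)
        double range = 1.0;            // info gain ranges up to log2(numClasses); 1.0 for two classes
        double delta = 1e-7;           // acceptable probability of choosing the wrong attribute
        long seen = 5000;              // examples observed at this leaf so far

        double epsilon = hoeffdingBound(range, delta, seen);
        if (bestGain - secondBestGain > epsilon) {
            System.out.println("Split now: the best attribute wins with high confidence (epsilon = " + epsilon + ")");
        } else {
            System.out.println("Wait for more examples (epsilon = " + epsilon + ")");
        }
    }
}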
An algorithm in SAMOA is represented as a graph of nodes that communicate via messages along streams connecting pairs of nodes. Borrowing Storm's terminology, this graph is called a Topology. Each node in the Topology is a Processor that sends messages to a Stream, and the user code that implements the algorithm lives inside the Processors. Figure 3 shows an example of a Processor joining two streams from two source Processors. Here is a code snippet to build such a topology in SAMOA:
// The TopologyBuilder is provided by the chosen execution platform (e.g., Storm or S4).
TopologyBuilder builder;

// First source processor and its output stream.
Processor sourceOne = new SourceProcessor();
builder.addProcessor(sourceOne);
Stream streamOne = builder.createStream(sourceOne);

// Second source processor and its output stream.
Processor sourceTwo = new SourceProcessor();
builder.addProcessor(sourceTwo);
Stream streamTwo = builder.createStream(sourceTwo);

// Join processor: reads streamOne with shuffle grouping and streamTwo with key grouping.
Processor join = new JoinProcessor();
builder.addProcessor(join).connectInputShuffle(streamOne).connectInputKey(streamTwo);
1. Download SAMOA
git clone git@github.com:yahoo/samoa.git
cd samoa
mvn -Pstorm package
2. Download the Forest CoverType dataset.
wget "http://downloads.sourceforge.net/project/moa-datastream/Datasets/Classification/covtypeNorm.arff.zip" unzip covtypeNorm.arff.zip
Forest CoverType contains the forest cover type for 30 x 30 meter cells, obtained from the US Forest Service (USFS) Region 2 Resource Information System (RIS) data. It contains 581,012 instances and 54 attributes, and it has been used in several papers on data stream classification.
3. Download a simple logging library.
wget "http://repo1.maven.org/maven2/org/slf4j/slf4j-simple/1.7.2/slf4j-simple-1.7.2.jar"
4. Run an example: classify the CoverType dataset with the VerticalHoeffdingTree in local mode.
java -cp slf4j-simple-1.7.2.jar:target/SAMOA-Storm-0.0.1.jar com.yahoo.labs.samoa.DoTask "PrequentialEvaluation -l classifiers.trees.VerticalHoeffdingTree -s (ArffFileStream -f covtypeNorm.arff) -f 100000"
The output will be a sequence of evaluation metrics for accuracy, reported every 100,000 instances.
To run the example on Storm, please refer to the instructions on the wiki.
For more information about SAMOA, see the README and the wiki on github, or post a question on the mailing list.
SAMOA is licensed under the Apache Software License v2.0. You are welcome to contribute to the project! SAMOA accepts contributions under an Apache-style contributor license agreement.
Good luck! We hope you find SAMOA useful. We will continue developing the framework by adding new algorithms and platforms.
Gianmarco De Francisci Morales (gdfm@yahoo-inc.com) and Albert Bifet (abifet@yahoo.com) @ Yahoo Labs Barcelona
Graphics research from Daniel Sýkora et al. at DCGI, Czech Republic, presents a method of real-time style transfer focused on human faces, similar to their previous StyLit work.
Note that the video below is a silent demonstration of results, and the official paper has not yet been made public:
Results video for the paper: Fišer et al.: Example-Based Synthesis of Stylized Facial Animations, to appear in ACM Transactions on Graphics 36(4):155, SIGGRAPH 2017.
[EDIT: 20 July 2017]
An official video (no audio) and a project page have been made public for the project:
We introduce a novel approach to example-based stylization of portrait videos that preserves both the subject’s identity and the visual richness of the input style exemplar. Unlike the current state-of-the-art based on neural style transfer [Selim et al. 2016], our method performs non-parametric texture synthesis that retains more of the local textural details of the artistic exemplar and does not suffer from image warping artifacts caused by aligning the style exemplar with the target face. Our method allows the creation of videos with less than full temporal coherence [Ruder et al. 2016]. By introducing a controllable amount of temporal dynamics, it more closely approximates the appearance of real hand-painted animation in which every frame was created independently. We demonstrate the practical utility of the proposed solution on a variety of style exemplars and target videos.
Link
Graphics research from Stanford University and collaborators is the latest development in facial expression transfer and visual puppetry, offering photorealistic and editable results:
We present a novel approach that enables photo-realistic re-animation of portrait videos using only an input video. In contrast to existing approaches that are restricted to manipulations of facial expressions only, we are the first to transfer the full 3D head position, head rotation, face expression, eye gaze, and eye blinking from a source actor to a portrait video of a target actor. The core of our approach is a generative neural network with a novel space-time architecture. The network takes as input synthetic renderings of a parametric face model, based on which it predicts photo-realistic video frames for a given target actor. The realism in this rendering-to-video transfer is achieved by careful adversarial training, and as a result, we can create modified target videos that mimic the behavior of the synthetically-created input. In order to enable source-to-target video re-animation, we render a synthetic target video with the reconstructed head animation parameters from a source video, and feed it into the trained network – thus taking full control of the target. With the ability to freely recombine source and target parameters, we are able to demonstrate a large variety of video rewrite applications without explicitly modeling hair, body or background. For instance, we can reenact the full head using interactive user-controlled editing, and realize high-fidelity visual dubbing. To demonstrate the high quality of our output, we conduct an extensive series of experiments and evaluations, where for instance a user study shows that our video edits are hard to detect.
More Here
Google Translate writes weird poetry if you repeat random characters.
(Above, my own experiments. Inspired by https://twitter.com/smutclyde )