‘Overwatch’ Fan Chris Guyot Created Adorable, Minimalist Animations Of The Game’s Heroes


‘Overwatch’ fan Chris Guyot created adorable, minimalist animations of the game’s heroes

follow @the-future-now

More Posts from Laossj and Others

8 years ago
Please Meet Our Terrible Fur Sons

Please meet our terrible fur sons

Using TensorFlow, Google’s open-source machine-learning framework, Christopher Hesse created edges2cats, where you draw a design and it’s filled in with… cat. It makes monsters.

8 years ago

Open Sourcing a Deep Learning Solution for Detecting NSFW Images

By Jay Mahadeokar and Gerry Pesavento

Automatically identifying that an image is not suitable/safe for work (NSFW), including offensive and adult images, is an important problem that researchers have been trying to tackle for decades. With images and user-generated content dominating the Internet today, filtering NSFW images has become an essential component of Web and mobile applications. With the evolution of computer vision, improved training data, and deep learning algorithms, computers are now able to automatically classify NSFW image content with greater precision.

Defining NSFW material is subjective and the task of identifying these images is non-trivial. Moreover, what may be objectionable in one context can be suitable in another. For this reason, the model we describe below focuses only on one type of NSFW content: pornographic images. The identification of NSFW sketches, cartoons, text, images of graphic violence, or other types of unsuitable content is not addressed with this model.

To the best of our knowledge, there is no open source model or algorithm for identifying NSFW images. In the spirit of collaboration and with the hope of advancing this endeavor, we are releasing our deep learning model that will allow developers to experiment with a classifier for NSFW detection, and provide feedback to us on ways to improve the classifier.

Our general-purpose Caffe deep neural network model (GitHub code) takes an image as input and outputs a probability (i.e., a score between 0 and 1) which can be used to detect and filter NSFW images. Developers can use this score to filter out images whose score exceeds a threshold chosen from an ROC curve for their specific use case, or use the signal to rank images in search results.
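For concreteness, here is a minimal sketch of scoring one image with the released model through pycaffe. The file names, the "prob" output blob, the NSFW class index, and the mean values are assumptions based on the repository layout, and the 0.8 threshold is purely illustrative; choose your own threshold from an ROC curve as described later in this post.

```python
# Minimal scoring sketch (assumed file names, blob name and mean values;
# the 0.8 threshold is only an example, pick yours from an ROC curve).
import numpy as np
import caffe

net = caffe.Net('deploy.prototxt', 'resnet_50_1by2_nsfw.caffemodel', caffe.TEST)

# Standard pycaffe preprocessing: HxWxC RGB floats -> CxHxW BGR, mean-subtracted.
transformer = caffe.io.Transformer({'data': net.blobs['data'].data.shape})
transformer.set_transpose('data', (2, 0, 1))
transformer.set_mean('data', np.array([104.0, 117.0, 123.0]))
transformer.set_raw_scale('data', 255)
transformer.set_channel_swap('data', (2, 1, 0))

image = caffe.io.load_image('photo.jpg')                  # RGB floats in [0, 1]
net.blobs['data'].data[...] = transformer.preprocess('data', image)
nsfw_score = net.forward()['prob'][0][1]                  # index 1 = NSFW probability

print('NSFW score: %.3f' % nsfw_score)
if nsfw_score > 0.8:                                      # illustrative threshold only
    print('filtering this image')
```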


Convolutional Neural Network (CNN) architectures and tradeoffs

In recent years, CNNs have become very successful in image classification problems [1] [5] [6]. Since 2012, new CNN architectures have continuously improved the accuracy of the standard ImageNet classification challenge. Some of the major breakthroughs include AlexNet (2012) [6], GoogLeNet [5], VGG (2014) [2] and Residual Networks (2015) [1]. These networks have different tradeoffs in terms of runtime, memory requirements, and accuracy. The main indicators for runtime and memory requirements are:

Flops or connections – The number of connections in a neural network determines the number of compute operations during a forward pass, which is proportional to the runtime of the network when classifying an image.

Parameters – The number of parameters in a neural network determines the amount of memory needed to load the network.

Ideally, we want a network with the minimum number of flops and parameters that still achieves maximum accuracy (a back-of-the-envelope sketch of both quantities follows).
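As a rough illustration of these two indicators, the helper functions below count the parameters and multiply-accumulate operations of a single convolutional layer (a simplified sketch; real totals are summed over every layer in a network):

```python
# Back-of-the-envelope counts for one k x k convolutional layer.

def conv_params(c_in, c_out, k):
    """Weights plus biases of a k x k convolution."""
    return c_out * (c_in * k * k + 1)

def conv_flops(c_in, c_out, k, h_out, w_out):
    """Multiply-accumulates for one forward pass over an h_out x w_out output map."""
    return c_out * h_out * w_out * (c_in * k * k)

# Example: a 7x7, stride-2 first convolution on a 224x224 RGB input.
print(conv_params(3, 64, 7))            # 9,472 parameters
print(conv_flops(3, 64, 7, 112, 112))   # ~118M multiply-accumulates
```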

Training a deep neural network for NSFW classification

We train the models using a dataset of positive (i.e., NSFW) images and negative (i.e., SFW, suitable/safe for work) images. We are not releasing the training images or other details due to the nature of the data; instead, we are open-sourcing the output model, which developers can use for classification.

We use the Caffe deep learning library and CaffeOnSpark; the latter is a powerful open source framework for distributed learning that brings Caffe deep learning to Hadoop and Spark clusters for training models (Big shout out to Yahoo’s CaffeOnSpark team!).

While training, the images were resized to 256x256 pixels, horizontally flipped for data augmentation, randomly cropped to 224x224 pixels, and then fed to the network. For training the residual networks, we used scale augmentation as described in the ResNet paper [1] to avoid overfitting. We evaluated several architectures, listed after the short augmentation sketch below, to explore the tradeoff between runtime and accuracy.
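Here is a minimal sketch of that resize, flip, and random-crop augmentation, written with Pillow purely for illustration (the actual training pipeline is not released; this only makes the transforms concrete):

```python
# Resize to 256x256, flip half the time, then take a random 224x224 crop.
import random
from PIL import Image

def augment(path):
    img = Image.open(path).convert('RGB').resize((256, 256))
    if random.random() < 0.5:                      # horizontal flip for augmentation
        img = img.transpose(Image.FLIP_LEFT_RIGHT)
    x = random.randint(0, 256 - 224)               # random crop offsets
    y = random.randint(0, 256 - 224)
    return img.crop((x, y, x + 224, y + 224))
```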

MS_CTC [4] – This architecture was proposed in Microsoft’s constrained time cost paper. It improves on AlexNet in both speed and accuracy while maintaining a combination of convolutional and fully-connected layers.

SqueezeNet [3] – This architecture introduces the fire module, which contains layers that squeeze and then expand the input data blob. This greatly reduces the number of parameters while keeping ImageNet accuracy on par with AlexNet, with a memory requirement of only 6 MB.

VGG [2] – This architecture has 13 conv layers and 3 FC layers.

GoogLeNet [5] – GoogLeNet introduces inception modules and has 20 convolutional layer stages. It also attaches auxiliary loss functions to intermediate layers to tackle the problem of vanishing gradients in deep networks.

ResNet-50 [1] – ResNets use shortcut connections to address the problem of vanishing gradients (a minimal sketch of such a block appears after this list). We used the 50-layer residual network released by the authors.

ResNet-50-thin – The model was generated using our pynetbuilder tool and replicates the Residual Network paper’s 50-layer network (with half the number of filters in each layer). You can find more details on how the model was generated and trained here.
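As referenced in the ResNet-50 item above, here is a minimal sketch of a residual block with a shortcut connection, written in PyTorch for illustration only (the released model itself is a Caffe network, and ResNet-50 proper uses bottleneck blocks):

```python
# A basic residual block: two convolutions plus an identity shortcut.
import torch.nn as nn

class BasicResidualBlock(nn.Module):
    def __init__(self, channels):
        super().__init__()
        self.conv1 = nn.Conv2d(channels, channels, 3, padding=1, bias=False)
        self.bn1 = nn.BatchNorm2d(channels)
        self.conv2 = nn.Conv2d(channels, channels, 3, padding=1, bias=False)
        self.bn2 = nn.BatchNorm2d(channels)
        self.relu = nn.ReLU(inplace=True)

    def forward(self, x):
        out = self.relu(self.bn1(self.conv1(x)))
        out = self.bn2(self.conv2(out))
        return self.relu(out + x)   # shortcut: add the input back before the activation
```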

[Figure: Tradeoffs of different architectures: accuracy vs. number of flops vs. number of parameters in the network.]

The deep models were first pre-trained on the ImageNet 1000-class dataset. For each network, we replace the last layer (FC1000) with a 2-node fully-connected layer, and then fine-tune the weights on the NSFW dataset. Note that we keep the learning rate multiplier for the last FC layer at 5 times the multiplier of the other layers being fine-tuned. We also tune the hyperparameters (step size, base learning rate) to optimize performance.
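The recipe above is specific to Caffe (where per-layer multipliers are set via lr_mult). Purely as an illustration of the same idea, here is a hedged PyTorch analogue that swaps in a 2-node head and gives it a 5x higher learning rate; the hyperparameter values are placeholders, not the post’s actual settings.

```python
# PyTorch analogue (not the Caffe setup used in the post): replace the 1000-way
# ImageNet classifier with a 2-node layer and train it at 5x the backbone rate.
import torch.nn as nn
import torch.optim as optim
from torchvision import models  # torchvision >= 0.13 for the weights enum

model = models.resnet50(weights=models.ResNet50_Weights.IMAGENET1K_V1)
model.fc = nn.Linear(model.fc.in_features, 2)        # SFW / NSFW head

base_lr = 1e-4                                        # illustrative value only
backbone = [p for n, p in model.named_parameters() if not n.startswith('fc.')]
optimizer = optim.SGD([
    {'params': backbone, 'lr': base_lr},              # layers being fine-tuned
    {'params': model.fc.parameters(), 'lr': 5 * base_lr},  # new head, 5x rate
], momentum=0.9)
```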

We observe that the performance of the models on the NSFW classification task tracks the performance of the pre-trained model on the ImageNet classification task: a better pre-trained model yields better fine-tuned classification performance. The graph below shows the relative performance on our held-out NSFW evaluation set. Please note that the false positive rate (FPR) at a fixed false negative rate (FNR) shown in the graph is specific to our evaluation dataset and is shown here for illustrative purposes. To use the models for NSFW filtering, we suggest that you plot an ROC curve using your own dataset and pick a suitable threshold; a short sketch of that step follows.
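For example, given a labeled evaluation set of your own, a threshold meeting a target false negative rate can be read off the ROC curve along these lines (scikit-learn sketch; the toy labels, scores, and 5% FNR target are placeholders):

```python
# Choose the largest threshold whose false negative rate stays at or below a target.
import numpy as np
from sklearn.metrics import roc_curve

labels = np.array([0, 0, 1, 1, 0, 1])                     # 1 = NSFW, 0 = SFW (toy labels)
scores = np.array([0.10, 0.40, 0.35, 0.80, 0.05, 0.90])   # model 'prob' outputs (toy)

fpr, tpr, thresholds = roc_curve(labels, scores)          # thresholds are decreasing
ok = (1.0 - tpr) <= 0.05                                  # points meeting the FNR target
idx = np.where(ok)[0][0]                                  # first hit = largest such threshold
print('chosen threshold: %.2f, FPR there: %.2f' % (thresholds[idx], fpr[idx]))
```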

[Figure: Comparison of the models’ performance on ImageNet and that of their counterparts fine-tuned on the NSFW dataset.]

We are releasing the thin ResNet-50 model, since it provides a good accuracy tradeoff and is lightweight in terms of runtime (< 0.5 sec on CPU) and memory (~23 MB). Please refer to our GitHub repository for instructions on using the model. We encourage developers to try the model for their NSFW filtering use cases. For any questions or feedback about the model’s performance, please create an issue and we will respond as soon as possible.

Results can be improved by fine-tuning the model for your dataset or use case. If you achieve improved performance, or if you have trained an NSFW model with a different architecture, we encourage you to contribute to the model or share a link on our description page.

Disclaimer: The definition of NSFW is subjective and contextual. This model is a general-purpose reference model which can be used for the preliminary filtering of pornographic images. We make no guarantees about the accuracy of its output; rather, we make it available for developers to explore and enhance as an open source project.

We would like to thank Sachin Farfade, Amar Ramesh Kamat, Armin Kappeler, and Shraddha Advani for their contributions in this work.

References:

[1] He, Kaiming, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. “Deep residual learning for image recognition.” arXiv preprint arXiv:1512.03385 (2015).

[2] Simonyan, Karen, and Andrew Zisserman. “Very deep convolutional networks for large-scale image recognition.” arXiv preprint arXiv:1409.1556 (2014).

[3] Iandola, Forrest N., Matthew W. Moskewicz, Khalid Ashraf, Song Han, William J. Dally, and Kurt Keutzer. “SqueezeNet: AlexNet-level accuracy with 50x fewer parameters and <1MB model size.” arXiv preprint arXiv:1602.07360 (2016).

[4] He, Kaiming, and Jian Sun. “Convolutional neural networks at constrained time cost.” In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 5353-5360. 2015.

[5] Szegedy, Christian, Wei Liu, Yangqing Jia, Pierre Sermanet, Scott Reed, Dragomir Anguelov, Dumitru Erhan, Vincent Vanhoucke, and Andrew Rabinovich. “Going deeper with convolutions.” In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 1-9. 2015.

[6] Krizhevsky, Alex, Ilya Sutskever, and Geoffrey E. Hinton. “ImageNet classification with deep convolutional neural networks.” In Advances in Neural Information Processing Systems, pp. 1097-1105. 2012.

7 years ago
Interactive Example Based Terrain Authoring With Conditional Adversarial Networks

Interactive Example Based Terrain Authoring with Conditional Adversarial Networks

Graphics research from LIRIS, Purdue University and Ubisoft presents a method of generating 3D landscape terrain from simple pen markings with the assistance of neural networks:

Authoring virtual terrains presents a challenge and there is a strong need for authoring tools able to create realistic terrains with simple user-inputs and with high user control. We propose an example-based authoring pipeline that uses a set of terrain synthesizers dedicated to specific tasks. Each terrain synthesizer is a Conditional Generative Adversarial Network trained by using real-world terrains and their sketched counterparts. The training sets are built automatically with a view that the terrain synthesizers learn the generation from features that are easy to sketch. During the authoring process, the artist first creates a rough sketch of the main terrain features, such as rivers, valleys and ridges, and the algorithm automatically synthesizes a terrain corresponding to the sketch using the learned features of the training samples. Moreover, an erosion synthesizer can also generate terrain evolution by erosion at a very low computational cost. Our framework allows for an easy terrain authoring and provides a high level of realism for a minimum sketch cost. We show various examples of terrain synthesis created by experienced as well as inexperienced users who are able to design a vast variety of complex terrains in a very short time. 

Link
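For intuition about the sketch-to-heightfield setup the abstract describes, here is a minimal, generic conditional-GAN (pix2pix-style) training step in PyTorch. The toy tensors, tiny networks, and loss weights are stand-ins, not the paper’s actual terrain synthesizers:

```python
# Generic conditional GAN step: G maps a sketch map to a terrain heightfield,
# D judges (sketch, heightfield) pairs. Toy 1-channel 64x64 tensors throughout.
import torch
import torch.nn as nn

G = nn.Sequential(nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(),
                  nn.Conv2d(16, 1, 3, padding=1))                  # sketch -> heightfield
D = nn.Sequential(nn.Conv2d(2, 16, 3, stride=2, padding=1), nn.ReLU(),
                  nn.Conv2d(16, 1, 3, stride=2, padding=1))        # pair -> real/fake map
bce, l1 = nn.BCEWithLogitsLoss(), nn.L1Loss()
opt_g = torch.optim.Adam(G.parameters(), lr=2e-4)
opt_d = torch.optim.Adam(D.parameters(), lr=2e-4)

sketch = torch.rand(4, 1, 64, 64)        # stand-in for sketched rivers/ridges
real_terrain = torch.rand(4, 1, 64, 64)  # stand-in for real-world heightfields

# Discriminator step: real pairs -> 1, generated pairs -> 0.
fake_terrain = G(sketch).detach()
d_loss = bce(D(torch.cat([sketch, real_terrain], 1)), torch.ones(4, 1, 16, 16)) + \
         bce(D(torch.cat([sketch, fake_terrain], 1)), torch.zeros(4, 1, 16, 16))
opt_d.zero_grad(); d_loss.backward(); opt_d.step()

# Generator step: fool the discriminator and stay close to the real terrain (L1).
fake_terrain = G(sketch)
g_loss = bce(D(torch.cat([sketch, fake_terrain], 1)), torch.ones(4, 1, 16, 16)) + \
         100 * l1(fake_terrain, real_terrain)
opt_g.zero_grad(); g_loss.backward(); opt_g.step()
```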

7 years ago
Deep Video Portraits

Deep Video Portraits

Graphics research from Stanford University and collaborators is the latest development in facial expression transfer and visual puppetry, offering photorealistic and editable results:

We present a novel approach that enables photo-realistic re-animation of portrait videos using only an input video. In contrast to existing approaches that are restricted to manipulations of facial expressions only, we are the first to transfer the full 3D head position, head rotation, face expression, eye gaze, and eye blinking from a source actor to a portrait video of a target actor. The core of our approach is a generative neural network with a novel space-time architecture. The network takes as input synthetic renderings of a parametric face model, based on which it predicts photo-realistic video frames for a given target actor. The realism in this rendering-to-video transfer is achieved by careful adversarial training, and as a result, we can create modified target videos that mimic the behavior of the synthetically-created input. In order to enable source-to-target video re-animation, we render a synthetic target video with the reconstructed head animation parameters from a source video, and feed it into the trained network – thus taking full control of the target. With the ability to freely recombine source and target parameters, we are able to demonstrate a large variety of video rewrite applications without explicitly modeling hair, body or background. For instance, we can reenact the full head using interactive user-controlled editing, and realize high-fidelity visual dubbing. To demonstrate the high quality of our output, we conduct an extensive series of experiments and evaluations, where for instance a user study shows that our video edits are hard to detect. 

More Here

7 years ago
SP. Household Robot Calculates Optimal Move To Win Using Artificial Intelligence And Augmented Vision

SP. Household robot calculates optimal move to win using artificial intelligence and augmented vision capabilities but does not tell anyone.

Bicentennial Man (1999)

7 years ago
Checks

Checks

7 years ago
Just A Reminder For The Community!

Just a reminder for the community!

Run a Bitcoin Core 0.14.1 full node and support SegWit!

Support my electricity bill if you like: Bitcoin: 1FSZytTNZNqs69mSh5grU73DmrPVtBkz7m

7 years ago
Photo-editing App FaceApp Now Includes Black, Asian Indian And Caucasian Filters

Photo-editing app FaceApp now includes Black, Asian Indian and Caucasian filters

On Wednesday morning, the photo-editing app FaceApp released new photo filters that change the ethnic appearance of your face.

The app first became popular earlier in 2017 due to its ability to transform people into elderly versions of themselves and different genders.

These new options, however, will likely cause some outrage: The filters are Asian, Black, Caucasian and Indian.

Selfie apps like Snapchat have taken criticism for filters that apply “digital blackface.” In 2016, Snapchat released a Bob Marley filter that darkened the skin and gave users dreadlocks. Snapchat said another one of its 2016 filters was “inspired by anime,” but many people called it “yellowface,” as it seemingly turned the user into an Asian stereotype.

FaceApp’s newest filters, however, don’t pretend they’re anything but racial. Read more (8/9/17 12 PM)

follow @the-future-now

7 years ago
Silicon Valley Entrepreneur And Novelist Rob Reid Takes On Artificial Intelligence — And How It Might

Silicon Valley entrepreneur and novelist Rob Reid takes on artificial intelligence — and how it might end the world — in his weird, funny techno-philosophical thriller, After On.

Critic Jason Sheehan says, “It’s like an extended philosophy seminar run by a dozen insane Cold War heads-of-station, three millennial COOs and that guy you went to college with who always had the best weed but never did his laundry.”

‘After On’ Sees The End Of The World In A Dating App

7 years ago

Japan just sent the Int-Ball, a photo and video drone, to the International Space Station. Its mission is to document the astronauts. Previously, astronauts spent 10% of their time doing photo and video documentation. Int-Ball’s footage can be seen in real time.

follow @the-future-now
