Today I Witnessed This Tragic Moment At The Museum Of Communication #robotfriend

More Posts from Laossj and Others

7 years ago
100 Cans Of Spray Paint, 60 Hours Of Painting, 24 Individual Frames

INSA is a graffiti artist who makes gif animations out of his physical art. Here he paints and animates the beautiful original painting by James Jean, which was created for Paramount’s new movie mother!

Click here to watch INSA bring this painting to life

7 years ago
The Artificial Intelligence Boom Is Here. Here’s How It Could Change The World Around Us.

A future with highways full of self-driving cars or robot friends that can actually hold a decent conversation may not be far away.

That’s because we’re living in the middle of an “artificial intelligence boom” — a time when machines are becoming more and more like the human brain.

That’s partly because of an emerging subcategory of AI called “deep learning.” It’s a technique that often tries to mimic the human brain’s neocortex, which supports language processing, sensory perception and other functions.

From allowing us to understand the Earth’s trees to teaching robots how to understand human life, deep learning is changing our world. Read more (5/26/17)

Follow @the-future-now

8 years ago

Open Sourcing a Deep Learning Solution for Detecting NSFW Images

By Jay Mahadeokar and Gerry Pesavento

Automatically identifying an image as not suitable/safe for work (NSFW), including offensive and adult images, is an important problem that researchers have been trying to tackle for decades. With images and user-generated content dominating the Internet today, filtering NSFW images has become an essential component of Web and mobile applications. With the evolution of computer vision, improved training data, and deep learning algorithms, computers can now automatically classify NSFW image content with greater precision.

Defining NSFW material is subjective and the task of identifying these images is non-trivial. Moreover, what may be objectionable in one context can be suitable in another. For this reason, the model we describe below focuses only on one type of NSFW content: pornographic images. The identification of NSFW sketches, cartoons, text, images of graphic violence, or other types of unsuitable content is not addressed with this model.

To the best of our knowledge, there is no open source model or algorithm for identifying NSFW images. In the spirit of collaboration and with the hope of advancing this endeavor, we are releasing our deep learning model that will allow developers to experiment with a classifier for NSFW detection, and provide feedback to us on ways to improve the classifier.

Our general-purpose Caffe deep neural network model (Github code) takes an image as input and outputs a probability (i.e., a score between 0 and 1) that can be used to detect and filter NSFW images. Developers can use this score to filter out images whose score exceeds a suitable threshold, chosen from an ROC curve for their specific use case, or use the signal to rank images in search results.
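As a concrete sketch of how the score might be consumed downstream (the function names and the 0.8 cutoff are hypothetical, not from the released repository):

```python
def is_nsfw(score: float, threshold: float = 0.8) -> bool:
    """Flag an image as NSFW when the model's score meets a threshold.

    0.8 is an arbitrary example value; the post recommends choosing the
    threshold from an ROC curve on your own data.
    """
    return score >= threshold

def rank_by_safety(scored_images):
    """Order (image_id, score) pairs from least to most likely NSFW,
    e.g. for demoting risky images in search results."""
    return sorted(scored_images, key=lambda pair: pair[1])
```

Filtering and ranking are the two uses the post suggests; both reduce to comparing against, or sorting on, the single scalar the network emits.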

Convolutional Neural Network (CNN) architectures and tradeoffs

In recent years, CNNs have been very successful in image classification problems [1] [5] [6]. Since 2012, new CNN architectures have continuously improved accuracy on the standard ImageNet classification challenge. Some of the major breakthroughs include AlexNet (2012) [6], GoogLeNet (2014) [5], VGG (2014) [2] and Residual Networks (2015) [1]. These networks involve different tradeoffs in terms of runtime, memory requirements, and accuracy. The main indicators for runtime and memory requirements are:

Flops or connections – The number of connections in a neural network determines the number of compute operations during a forward pass, which is proportional to the network’s runtime when classifying an image.

Parameters – The number of parameters in a neural network determines the amount of memory needed to load the network.

Ideally, we want a network with minimal flops and minimal parameters that still achieves maximum accuracy.
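The two cost indicators can be made concrete with back-of-the-envelope formulas for a single convolutional layer (a generic sketch; the layer shapes in the example are illustrative, not taken from any of the networks above):

```python
def conv_params(c_in: int, c_out: int, k: int) -> int:
    """Parameters in a k x k convolution: one k*k*c_in kernel plus a
    bias per output channel."""
    return c_out * (c_in * k * k + 1)

def conv_flops(c_in: int, c_out: int, k: int, h_out: int, w_out: int) -> int:
    """Flops in a forward pass: every output position applies the full
    kernel, counting each multiply-add as 2 flops."""
    return 2 * c_out * c_in * k * k * h_out * w_out

# e.g. a VGG-style first layer: RGB in, 64 channels out, 3x3 kernel
print(conv_params(3, 64, 3))  # 1792 parameters
```

Summing these quantities over all layers gives the per-network totals behind the tradeoff comparison that follows.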

Training a deep neural network for NSFW classification

We train the models using a dataset of positive (i.e., NSFW) images and negative (i.e., SFW – suitable/safe for work) images. We are not releasing the training images or other details due to the nature of the data; instead, we open-source the output model, which developers can use for classification.

We use the Caffe deep learning library and CaffeOnSpark; the latter is a powerful open source framework for distributed learning that brings Caffe deep learning to Hadoop and Spark clusters for training models (Big shout out to Yahoo’s CaffeOnSpark team!).

During training, images were resized to 256x256 pixels, horizontally flipped for data augmentation, and randomly cropped to 224x224 pixels before being fed to the network. For training residual networks, we used scale augmentation as described in the ResNet paper [1] to avoid overfitting. We evaluated various architectures to explore the tradeoff between runtime and accuracy.
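The flip-and-crop augmentation described above can be sketched with NumPy (an illustrative stand-in; the actual pipeline would have used Caffe's data layer, and resizing to 256x256 is assumed to have happened upstream):

```python
import random

import numpy as np

def augment(img: np.ndarray, crop: int = 224) -> np.ndarray:
    """Training-time augmentation as described in the post: a random
    horizontal flip, then a random crop x crop patch taken from a
    256x256 (H, W, C) image."""
    if random.random() < 0.5:
        img = img[:, ::-1, :]  # flip left-right
    h, w = img.shape[:2]
    top = random.randint(0, h - crop)
    left = random.randint(0, w - crop)
    return img[top:top + crop, left:left + crop, :]
```

Random cropping means the network sees a slightly different 224x224 window of each image every epoch, which is what makes this cheap augmentation effective against overfitting.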

MS_CTC [4] – This architecture was proposed in Microsoft’s constrained-time-cost paper. It improves on AlexNet in both speed and accuracy while maintaining a combination of convolutional and fully-connected layers.

SqueezeNet [3] – This architecture introduces the fire module, which contains layers that squeeze and then expand the input data blob. This reduces the number of parameters while keeping ImageNet accuracy comparable to AlexNet, with a memory requirement of only 6MB.

VGG [2] – This architecture has 13 conv layers and 3 FC layers.

GoogLeNet [5] – GoogLeNet introduces inception modules and has 20 convolutional layer stages. It also attaches auxiliary loss functions to intermediate layers to tackle the problem of vanishing gradients in deep networks.

ResNet-50 [1] – ResNets use shortcut connections to address the vanishing gradient problem. We used the 50-layer residual network released by the authors.

ResNet-50-thin – This model was generated using our pynetbuilder tool and replicates the 50-layer network from the Residual Network paper, with half the number of filters in each layer. You can find more details on how the model was generated and trained here.

Tradeoffs of different architectures: accuracy vs. number of flops vs. number of parameters in the network.

The deep models were first pre-trained on the ImageNet 1000-class dataset. For each network, we replace the last layer (FC1000) with a 2-node fully-connected layer, then fine-tune the weights on the NSFW dataset. Note that we keep the learning rate multiplier for the last FC layer at 5 times the multiplier of the other layers being fine-tuned. We also tune the hyperparameters (step size, base learning rate) to optimize performance.
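In Caffe, this head replacement is expressed in the network's prototxt. The fragment below is a sketch of the idea only; the layer and blob names are assumptions, not taken from the released model:

```
layer {
  name: "fc_nsfw"                        # replaces the pre-trained FC1000 layer
  type: "InnerProduct"
  bottom: "pool5"                        # hypothetical name of the preceding blob
  top: "fc_nsfw"
  param { lr_mult: 5 }                   # weights: 5x the multiplier of fine-tuned layers
  param { lr_mult: 5 }                   # biases: same multiplier, for illustration
  inner_product_param { num_output: 2 }  # two nodes: SFW vs. NSFW
}
```

Giving the freshly initialized head a larger learning rate multiplier lets it adapt quickly while the pre-trained layers change only slowly.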

We observe that a model’s performance on the NSFW classification task tracks the performance of its pre-trained counterpart on ImageNet classification: a better pre-trained model yields better fine-tuned results. The graph below shows the relative performance on our held-out NSFW evaluation set. Please note that the false positive rate (FPR) at a fixed false negative rate (FNR) shown in the graph is specific to our evaluation dataset and is shown here for illustrative purposes. To use the models for NSFW filtering, we suggest that you plot the ROC curve on your own dataset and pick a suitable threshold.
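Picking a threshold from such a curve can be sketched in a few lines of NumPy (a generic illustration of the FPR-at-fixed-FNR selection, not code from the released repository):

```python
import numpy as np

def pick_threshold(nsfw_scores, sfw_scores, target_fnr=0.05):
    """Choose a score threshold whose false-negative rate on known-NSFW
    images stays at or below target_fnr, and report the resulting
    false-positive rate on known-SFW images.

    Images scoring >= threshold are treated as NSFW.
    """
    nsfw_scores = np.sort(np.asarray(nsfw_scores, dtype=float))
    # Allow at most floor(target_fnr * n) positives to fall below the threshold.
    k = int(np.floor(target_fnr * len(nsfw_scores)))
    threshold = float(nsfw_scores[k])
    fpr = float(np.mean(np.asarray(sfw_scores, dtype=float) >= threshold))
    return threshold, fpr
```

The target FNR here is an example value; the right operating point depends on how costly missed NSFW images are relative to wrongly filtered SFW ones in your application.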

Comparison of model performance on ImageNet and of their counterparts fine-tuned on the NSFW dataset.

We are releasing the thin ResNet-50 model, since it provides a good accuracy tradeoff and is lightweight in terms of runtime (< 0.5 sec on CPU) and memory (~23 MB). Please refer to our git repository for instructions and usage of the model. We encourage developers to try the model for their NSFW filtering use cases. For any questions or feedback about the model’s performance, we encourage you to create an issue and we will respond as soon as possible.

Results can be improved by fine-tuning the model on your dataset or use case. If you achieve improved performance, or if you have trained an NSFW model with a different architecture, we encourage you to contribute to the model or share the link on our description page.

Disclaimer: The definition of NSFW is subjective and contextual. This model is a general-purpose reference model that can be used for preliminary filtering of pornographic images. We do not guarantee the accuracy of its output; rather, we make it available for developers to explore and enhance as an open source project.

We would like to thank Sachin Farfade, Amar Ramesh Kamat, Armin Kappeler, and Shraddha Advani for their contributions in this work.

References:

[1] He, Kaiming, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. “Deep residual learning for image recognition.” arXiv preprint arXiv:1512.03385 (2015).

[2] Simonyan, Karen, and Andrew Zisserman. “Very deep convolutional networks for large-scale image recognition.” arXiv preprint arXiv:1409.1556 (2014).

[3] Iandola, Forrest N., Matthew W. Moskewicz, Khalid Ashraf, Song Han, William J. Dally, and Kurt Keutzer. “SqueezeNet: AlexNet-level accuracy with 50x fewer parameters and <1MB model size.” arXiv preprint arXiv:1602.07360 (2016).

[4] He, Kaiming, and Jian Sun. “Convolutional neural networks at constrained time cost.” In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 5353-5360. 2015.

[5] Szegedy, Christian, Wei Liu, Yangqing Jia, Pierre Sermanet, Scott Reed, Dragomir Anguelov, Dumitru Erhan, Vincent Vanhoucke, and Andrew Rabinovich. “Going deeper with convolutions.” In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 1-9. 2015.

[6] Krizhevsky, Alex, Ilya Sutskever, and Geoffrey E. Hinton. “ImageNet classification with deep convolutional neural networks.” In Advances in Neural Information Processing Systems, pp. 1097-1105. 2012.

7 years ago

Channel your inner #WonderWoman and discover your coding powers! https://goo.gl/n0TMGq

7 years ago

Vimeo pitch from the founders of Ethereum, who want to use the Bitcoin architecture to reinvent the rest of our political economy: smart contracts, distributed corporations, and even decentralized political parties.

7 years ago

Timelapse of Star Trails over Sparks Lake, Oregon

7 years ago
5 Demos Where Code Meets Music

The latest Nat & Friends episode showcases a selection of web-based experiments exploring sound and music (plus a couple of Google Assistant easter eggs):

Music is a fun way to explore technologies like coding, VR, and machine learning. Here are a few musical demos and experiments that you can play with – created by musicians, coders, and some friends at Google.

More Here

7 years ago

Interesting Papers for Week 28, 2017

When do correlations increase with firing rates in recurrent networks? Barreiro, A. K., & Ly, C. (2017). PLOS Computational Biology, 13(4), e1005506.

Consequences of the Oculomotor Cycle for the Dynamics of Perception. Boi, M., Poletti, M., Victor, J. D., & Rucci, M. (2017). Current Biology, 27(9), 1268–1277.

The Head-Direction Signal Plays a Functional Role as a Neural Compass during Navigation. Butler, W. N., Smith, K. S., van der Meer, M. A. A., & Taube, J. S. (2017). Current Biology, 27(9), 1259–1267.

Predicting explorative motor learning using decision-making and motor noise. Chen, X., Mohr, K., & Galea, J. M. (2017). PLOS Computational Biology, 13(4), e1005503.

Feedback Synthesizes Neural Codes for Motion. Clarke, S. E., & Maler, L. (2017). Current Biology, 27(9), 1356–1361.

Direct Brain Stimulation Modulates Encoding States and Memory Performance in Humans. Ezzyat, Y., Kragel, J. E., Burke, J. F., Levy, D. F., Lyalenko, A., Wanda, P., … Pedisich, I. (2017). Current Biology, 27(9), 1251–1258.

A map of abstract relational knowledge in the human hippocampal–entorhinal cortex. Garvert, M. M., Dolan, R. J., & Behrens, T. E. (2017). eLife, 6(e17086).

Sequential sensory and decision processing in posterior parietal cortex. Ibos, G., & Freedman, D. J. (2017). eLife, 6(e23743).

Active Dentate Granule Cells Encode Experience to Promote the Addition of Adult-Born Hippocampal Neurons. Kirschen, G. W., Shen, J., Tian, M., Schroeder, B., Wang, J., Man, G., … Ge, S. (2017). Journal of Neuroscience, 37(18), 4661–4678.

Subsampling scaling. Levina, A., & Priesemann, V. (2017). Nature Communications, 8, 15140.

Noise-enhanced coding in phasic neuron spike trains. Ly, C., & Doiron, B. (2017). PLOS ONE, 12(5), e0176963.

Spatial working memory alters the efficacy of input to visual cortex. Merrikhi, Y., Clark, K., Albarran, E., Parsa, M., Zirnsak, M., Moore, T., & Noudoost, B. (2017). Nature Communications, 8, 15041.

Brain networks for confidence weighting and hierarchical inference during probabilistic learning. Meyniel, F., & Dehaene, S. (2017). Proceedings of the National Academy of Sciences of the United States of America, 114(19), E3859–E3868.

Statistical learning in social action contexts. Monroy, C., Meyer, M., Gerson, S., & Hunnius, S. (2017). PLOS ONE, 12(5), e0177261.

Saccadic eye movements impose a natural bottleneck on visual short-term memory. Ohl, S., & Rolfs, M. (2017). Journal of Experimental Psychology: Learning, Memory, and Cognition, 43(5), 736–748.

Correlates of Perceptual Orientation Biases in Human Primary Visual Cortex. Patten, M. L., Mannion, D. J., & Clifford, C. W. G. (2017). Journal of Neuroscience, 37(18), 4744–4750.

Medial Entorhinal Cortex Selectively Supports Temporal Coding by Hippocampal Neurons. Robinson, N. T. M., Priestley, J. B., Rueckemann, J. W., Garcia, A. D., Smeglin, V. A., Marino, F. A., & Eichenbaum, H. (2017). Neuron, 94(3), 677–688.e6.

Towards a theory of cortical columns: From spiking neurons to interacting neural populations of finite size. Schwalger, T., Deger, M., & Gerstner, W. (2017). PLOS Computational Biology, 13(4), e1005507.

Homeostatic Plasticity Shapes Cell-Type-Specific Wiring in the Retina. Tien, N.-W., Soto, F., & Kerschensteiner, D. (2017). Neuron, 94(3), 656–665.e4.

Robust information propagation through noisy neural circuits. Zylberberg, J., Pouget, A., Latham, P. E., & Shea-Brown, E. (2017). PLOS Computational Biology, 13(4), e1005497.

7 years ago
I…I Think I’m In Love?

7 years ago
“Promise You Won’t Leave Without Me.”
